Good point, I should at least explain why I don’t think the particular biases Dmytry listed apply to me (or at least probably apply to a much lesser extent than to his “intended audience”).
Innate fears—Explained here why I’m not too afraid of AI risks.
Political orientation—Used to be libertarian, now not very political. Don’t see how either would bias me on AI risks.
Religion—Never had one since my parents are atheists.
Deeply repressed religious beliefs—See above.
Xenophobia—I don’t detect much xenophobia in myself when I think about AIs. (Is there a better test for this?)
Fiction—Not disclaiming this one.
Wishful thinking—This would only bias me against thinking AI risks are high, no?
Sunk cost fallacy—I guess I have some sunk costs here (time spent thinking about Singularity strategies), but they seem minimal and were incurred only after I had already started worrying about UFAI.
I should at least explain why I don’t think the particular biases Dmytry listed apply to me...
I don’t think you are biased. It has become sort of taboo within rationality circles to claim that one is less biased than other people. I doubt many people on Less Wrong are easily prone to most of the usual biases. I think it would be more important to examine possible new kinds of artificial biases, like stating “politics is the mind killer” as if it were some sort of incantation or confirmation that one is part of the rationality community, to name a minor example.
A more realistic bias when it comes to AI risks would be the question of how much of your worries are socially influenced versus the result of personal insight, real worries about your future, and true feelings of moral obligation. In other words, how much of it is based on the idea that “if you are a true rationalist you have to worry about risks from AI” versus “it is rational to worry about risks from AI”? (Note: I am not trying to claim anything here, just trying to improve Dmytry’s list of biases.)
Think about it this way. Imagine a counterfactual world where you studied AI and received money to study reinforcement learning or some other related subject. Further imagine that in this world SI/LW did not exist, and neither did any similar community that treats ‘rationality’ in the same way. Do you think that you would worry a lot about risks from AI?
I started worrying about AI risks (or rather the risks of a bad Singularity in general) well before SI/LW. Here’s a 1997 post:
There is a very realistic chance that the Singularity may turn out to be undesirable to many of us. Perhaps it will be unstable and destroy all closely-coupled intelligence. Or maybe the only entity that emerges from it will have the “personality” of the Blight.
You can also see here that I was strongly influenced by Vernor Vinge’s novels. I’d like to think that if I had read the same ideas in a dry academic paper I would have been similarly affected, but I’m not sure how to check that, or, if I wouldn’t have been, whether that would have been more rational.
I read that box as meaning “the list of cognitive biases” and took the listing of a few as meaning “don’t just go ‘oh yeah, cognitive biases, I know about those so I don’t need to worry about them any more’, actually think about them.”
Full points for having thought about them, definitely—but explicitly considering yourself immune to cognitive biases strikes me as … asking for trouble.
Innate fears—Explained here why I’m not too afraid of AI risks.
You read fiction, and some of it is made to play on fears, i.e. to create more fearsome scenarios. The ratio between fearsome and nice scenarios is set by the market.
Political orientation—Used to be libertarian, now not very political. Don’t see how either would bias me on AI risks.
You assume zero bias? See, the issue is that I don’t think you have a whole lot of signal getting through the graph of unknown blocks. Consequently, any residual biases could win the battle.
Religion—Never had one since my parents are atheists.
Maybe a small bias, considering that society is full of religious people.
Xenophobia—I don’t detect much xenophobia in myself when I think about AIs. (Is there a better test for this?)
I didn’t notice your ‘we’ including the AI at the origin of that thread, so there is at least a little of this bias.
Wishful thinking—This would only bias me against thinking AI risks are high, no?
Yes. I am not listing only the biases that push toward AI risk. Fiction, for instance, can bias both for and against, depending on the choice of fiction.
Sunk cost fallacy—I guess I have some sunk costs here (time spent thinking about Singularity strategies), but they seem minimal and were incurred only after I had already started worrying about UFAI.
But how small is it compared to the signal?
It is not about the absolute values of the biases; it is about the values of the biases relative to the reasonable signal you could get here.
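To make that relative-magnitude point concrete, here is a minimal numerical sketch in Python. All of the magnitudes are hypothetical, chosen only for illustration: biases that are small in absolute terms can still rival a signal that has been heavily attenuated on its way through a chain of uncertain inferences.

```python
# Hypothetical magnitudes, for illustration only: even small residual
# biases can dominate a conclusion when little signal survives the
# "graph of unknown blocks" between the evidence and the conclusion.
signal = 1.0                   # the underlying evidential signal
attenuation = 0.05             # fraction of it that survives the chain
biases = [0.02, -0.01, 0.03]   # small residual biases (fiction, politics, ...)

surviving_signal = signal * attenuation
total_bias = sum(biases)

print(f"signal reaching the conclusion: {surviving_signal:.3f}")
print(f"sum of residual biases:         {total_bias:+.3f}")
print(f"|bias| / signal ratio:          {abs(total_bias) / surviving_signal:.2f}")
# With these numbers each bias is tiny in absolute terms, yet their sum
# is comparable to the surviving signal, so it "could win the battle".
```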
On the assumption that you’re a human, I don’t feel the burden of proof is on me to demonstrate that you are cognitively similar to humans in general.