I should at least explain why I don’t think the particular biases Dmytry listed apply to me...
I don’t think you are biased. It has become something of a taboo within rationality circles to claim that one is less biased than other people. I doubt many people on Less Wrong are easily prone to most of the usual biases. I think it would be more important to examine possible new kinds of artificial biases, such as stating “politics is the mind killer” as if it were some sort of incantation or confirmation that one is part of the rationality community, to name one minor example.
A more realistic bias when it comes to AI risks would concern the question of how much of your worry is socially influenced versus the result of personal insight, genuine concern about your future, and a true feeling of moral obligation. In other words, how much of it is based on the idea that “if you are a true rationalist you have to worry about risks from AI” versus “it is rational to worry about risks from AI”? (Note: I am not trying to claim anything here, just trying to improve Dmytry’s list of biases.)
Think about it this way. Imagine a counterfactual world where you studied AI and received money to study reinforcement learning or some other related subject. Further imagine that in this world SI/LW did not exist, nor any similar community that treats ‘rationality’ the same way. Do you think that you would worry a lot about risks from AI?
I started worrying about AI risks (or rather the risks of a bad Singularity in general) well before SI/LW. Here’s a 1997 post:
There is a very realistic chance that the Singularity may turn out to be undesirable to many of us. Perhaps it will be unstable and destroy all closely-coupled intelligence. Or maybe the only entity that emerges from it will have the “personality” of the Blight.
You can also see here that I was strongly influenced by Vernor Vinge’s novels. I’d like to think that if I had read the same ideas in a dry academic paper I would have been similarly affected, but I’m not sure how to check that, or whether, if I wouldn’t have been, that would have been the more rational response.