For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the Singularity is much less likely than most LessWrongers do. Which is quite comforting, because my estimated probability of the Singularity is still higher than my estimated probability that the problem of Friendly AI is tractable.
Just chiming in here because I think the question about the Singularity on the LW survey was not well designed to capture the opinions of those who don’t think it is likely to happen at all, so the median LW view of the Singularity may not be what it appears.