For what it’s worth, I disagree with many (if not most) LessWrongers (LessWrongites? LessWrongoids?) on the subject of the Singularity. I am far from convinced that the Singularity is even possible in principle, and I am fairly certain that, even if it were possible, it would not occur within my lifetime, or my (hypothetical) children’s lifetimes.
EDIT: added a crucial “not” in the last sentence. Oops.
I also think the Singularity is much less likely than most LessWrongers do. That’s quite comforting, because my estimated probability of the Singularity is still higher than my estimated probability that the problem of Friendly AI is tractable.
Just chiming in here because I think the Singularity question on the LW survey was not well designed to capture the opinions of those who don’t think it is likely to happen at all, so the median LW view of the Singularity may not be what it appears.
Yeah… spending time on Less Wrong generally helps one appreciate how much existential risk there is, especially from technology, and how little attention is paid to it. Thinking about the Great Filter just makes everything seem even worse.
A runaway AI might wind up being very destructive, but quite probably not wholly destructive. It seems likely that it would find some of the knowledge humanity has built up over the millennia useful, regardless of its specific goals. In that sense, I think that even if a paperclip optimizer is built and eats the world, we won’t have been wholly forgotten in the way we would be if, e.g., the sun exploded and vaporized our planet. I don’t find this to be much comfort, but how much comfort it offers is a matter of personal taste.
Thank you. I’ve just updated on that. I now consider it even more likely that the world will be destroyed within my lifetime.