“Finally this sequence of posts is beginning to build to its hysterical climax. It might be difficult to convince us that doomsday probability calculations are more than swag-based-Bayesianism, but the effort will probably be entertaining.”
Hmm. I’ve seen little to indicate that this is going to end up being a discussion of the Doomsday Argument. Still, it would be interesting to see Eliezer’s own view. Everyone seems to have their own opinion as to why it’s unsound (and I agree that it’s unsound, for my own reasons...)
The last paragraph, though, is relevant to the view that nanotechnology or AI is potentially dangerous, a view we might want to accept without first creating the technology 1000 times and seeing what percentage of the time life on Earth is wiped out. But I don’t think this idea hinges on the DA.