(Last I checked, Robin Hanson put the probability of hard takeoff at less than 1%. An unfriendly singularity is so bad an outcome that research and discussion about hard takeoff are warranted, of course, but is it not a bit of an overreaction to suggest that this series of articles might be too dangerous to be made available to the public?)
If the probability of hard takeoff were 0.1%, that would still be too high a probability for me to want there to be public discussion of how one might build an AI.
Because the lifespan of galaxies is measured in billions of years, whereas the timescale of any delays we could realistically affect would instead be measured in years or decades, the consideration of risk trumps the consideration of opportunity cost. For example, a single percentage point of reduction in existential risk would be worth (from a utilitarian expected-utility point of view) a delay of over 10 million years.
http://www.nickbostrom.com/astronomical/waste.html
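(A rough back-of-the-envelope behind that figure, under the simplifying assumption, mine rather than anything stated above, that achievable value V scales linearly with the remaining usable lifetime of cosmic resources, T, on the order of 10^9 years or more: a delay of d years forfeits about (d/T)V, while cutting existential risk by Δp gains Δp·V in expectation, so accepting the delay is worthwhile whenever

\[
\Delta p \cdot V \;\ge\; \frac{d}{T}\,V
\quad\Longleftrightarrow\quad
d \;\le\; \Delta p \cdot T \;=\; 0.01 \times 10^{9}\ \text{yr} \;=\; 10^{7}\ \text{yr},
\]

i.e. a one-percentage-point reduction in risk justifies a delay of up to ten million years, and more still as T grows past a billion years.)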