What, if anything, do you think a LessWrong regular who's read the Sequences and all or most of MIRI's non-technical publications will get out of your book?
Along with the views of EY (which such readers would already know), I present the singularity views of Robin Hanson and Ray Kurzweil, and discuss the intelligence-enhancing potential of brain training, smart drugs, and eugenics. My thesis is that there are so many possible paths to superhuman intelligence, and such enormous military and economic benefits to developing it, that unless we destroy our high-tech civilization we will almost certainly create it.