http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
Thanks. That really helps. Do you know of any decent arguments suggesting that working to develop safe tool AI (or some other non-AGI AI) would increase existential risk?