Are there any decent arguments that working to develop safe AGI would increase existential risk? I've found none, but I'd like to know, since I'm considering developing AGI as a career.
Edit: What about AI that’s not AGI?
http://lesswrong.com/lw/cbs/thoughts_on_the_singularity_institute_si/
Thanks, that really helps. Do you know of any decent arguments that working to develop safe tool AI (or some other non-AGI AI) would increase existential risk?