Thanks. That really helps. Do you know of any decent arguments suggesting that working on trying to develop safe tool AI (or some other non-AGI AI) would increase existential risk?