I think there should be more effort put into researching the limits of controllability for self-improving machines. That aspect of rapid self-improvement seems pretty important to me, since it's there regardless of which architecture we use to get to the singularity. If the singularity is dangerous no matter how we get there, or how aligned our first try is, then, [clears throat and raises sign] don't build AGI?