The idea is to make an AGI that actually just wants to help us, rather than an AGI that wants to do something else but is constrained.
I recommend Scott’s Superintelligence FAQ for some basics if you haven’t read it before.
Thanks for answering and pointing out the FAQ, Raemon! What Scott describes sounds like a harmonious relationship between humans and AGI. Is that a fair summary?