Nah, I think this post is about a third component of the problem: ensuring that the solution to “what to steer at” that’s actually deployed is pro-humanity. A totalitarian government successfully figuring out how to load its regime’s values into the AGI has by no means failed at figuring out “what to steer at”. They know what they want and how to get it. It’s just that we don’t like the end result.
“Being able to steer at all” is a technical problem of designing AIs, “what to steer at” is a technical problem of precisely translating intuitive human goals into a formal language, and “where is the AI actually steered” is the realpolitik problem that this post is about.
Ah, yeah, that’s right.