Still, I’ve always had the impression that this line of work focused more on how to build a perfectly rational AGI than on building an aligned one. Can you explain why that’s inaccurate?
I don’t know what you mean by “perfectly rational AGI”. (Perfect rationality isn’t achievable, rationality-in-general is convergently instrumental, and rationality is insufficient for getting good outcomes. So why would that be the goal?)
I think of the basic case for HRAD this way:
We seem to be pretty confused about a lot of aspects of optimization, reasoning, decision-making, etc. (Embedded Agency is talking about more or less the same set of questions as HRAD, just with subsystem alignment added to the mix.)
If we were less confused, it might be easier to steer toward approaches to AGI that make it easier to do alignment work like ‘understand what cognitive work the system is doing internally’, ‘ensure that none of the system’s compute is being used to solve problems we don’t understand / didn’t intend’, ‘ensure that the amount of quality-adjusted thinking the system is putting into the task at hand is staying within some bound’, etc.
These approaches won’t look like decision theory, but being confused about basic ground-floor things like decision theory is a sign that you’re likely not in an epistemic position to efficiently find such approaches, much like being confused about how/whether chess is computable is a sign that you’re not in a position to efficiently steer toward good chess AI designs.
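To make the chess analogy concrete: here is a minimal sketch (not part of the original exchange, and using a hypothetical `GameState` interface) of why a finite perfect-information game like chess is computable in principle via exhaustive minimax. Real engines rely on pruning and heuristics rather than brute force; the point is only that the question "is chess computable?" has a clear, settled answer.

```python
from typing import List, Protocol


class GameState(Protocol):
    """Hypothetical interface for a finite, perfect-information two-player game."""

    def is_terminal(self) -> bool: ...
    def value(self) -> int: ...  # +1 if the maximizing player won, -1 if they lost, 0 for a draw
    def moves(self) -> List["GameState"]: ...


def minimax(state: GameState, maximizing: bool) -> int:
    """Game-theoretic value of `state`, found by exhaustively searching the game tree.

    Because chess (with the 50-move and repetition rules) has finitely many
    positions, this search terminates in principle, which is what makes the
    game computable, even though it is hopelessly slow in practice.
    """
    if state.is_terminal():
        return state.value()
    child_values = [minimax(child, not maximizing) for child in state.moves()]
    return max(child_values) if maximizing else min(child_values)
```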
OK, thanks for the clarifications!