Yep, I’d say I intuitively agree with all of that, though I’d add that if you want to specify the set of “outcomes” differently from the set of “goals”, then that must mean you’re implicitly defining a mapping from outcomes to goals. One analogy could be that an outcome is like a thermodynamic microstate (in the sense that it’s a complete description of all the features of the universe) while a goal is like a thermodynamic macrostate (in the sense that it’s a complete description of the features of the universe that the system can perceive).
This mapping from outcomes to goals won’t be injective for any real embedded system. But in the unrealistic limit where your system is so capable that it has a “perfect ontology” — i.e., its perception apparatus can resolve every outcome / microstate from every other — this mapping converges to the identity function, and the system’s set of possible goals converges to its set of possible outcomes. (This is the dualistic case, e.g., AIXI and such. But plausibly, we should also expect a self-improving system to improve its own perception apparatus, such that its effective goal-set becomes finer and finer with each improvement cycle. So even this partition over goals can’t be treated as constant in the general case.)
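To make the analogy concrete, here is a minimal sketch (all names and the toy outcome space are illustrative, not from the discussion): outcomes play the role of microstates, and a perception map sends each outcome to the macrostate/goal the system can actually distinguish. A coarse perception map is non-injective; in the hypothetical perfect-ontology limit it becomes the identity, and the goal-set coincides with the outcome-set.

```python
# Toy outcome space ("microstates"): full descriptions of the world.
outcomes = ["hot-left", "hot-right", "cold-left", "cold-right"]

# A coarse perception apparatus: only temperature is resolved, so the
# map from outcomes to goals is many-to-one (not injective).
def coarse_perception(outcome):
    return outcome.split("-")[0]  # collapses to "hot" or "cold"

# The unrealistic "perfect ontology" limit: every microstate is resolved
# from every other, so the map is the identity function.
def perfect_perception(outcome):
    return outcome

coarse_goals = {coarse_perception(o) for o in outcomes}
perfect_goals = {perfect_perception(o) for o in outcomes}

print(coarse_goals)                     # {"hot", "cold"}: two macrostates
print(perfect_goals == set(outcomes))   # True: goal-set == outcome-set
```

A self-improvement cycle in this picture would correspond to replacing `coarse_perception` with a map that refines the partition, e.g. one that also resolves left vs. right.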
Ah so I think what you’re saying is that for a given outcome, we can ask whether there is a goal we can give to the system such that it steers towards that outcome. Then, as a system becomes more powerful, the range of outcomes that it can steer towards expands. That seems very reasonable to me, though the question that strikes me as most interesting is: what can be said about the internal structure of physical objects that have power in this sense?