There were various notions/frames of optimization floating around, and I tried my best to distill them:
Eliezer’s Measuring Optimization Power: measuring optimization by the unlikelihood of the achieved outcome, ranked against the agent’s preference ordering
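As a concreteness check, here’s a minimal sketch of that measure as I understand it (function names are mine; this assumes a finite, uniformly weighted outcome space):

```python
import math

def optimization_power(outcome, all_outcomes, utility):
    """Bits of optimization: -log2 of the fraction of outcomes ranked
    at least as high as the achieved one by the preference ordering."""
    at_least_as_good = sum(1 for o in all_outcomes if utility(o) >= utility(outcome))
    return -math.log2(at_least_as_good / len(all_outcomes))

# Hitting the single best outcome out of 1024 equally weighted ones = 10 bits.
print(optimization_power(1023, range(1024), utility=lambda o: o))  # 10.0
```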
Alex Flint’s The ground of optimization: an “optimizing system” as one whose system-as-a-whole evolution robustly tends toward a target configuration set
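And a toy illustration of the robustness criterion (my own construction, not Flint’s): the system below keeps evolving toward its target configuration set even when the state is repeatedly knocked around.

```python
import random

def evolve(x, steps=200, perturb=0.5):
    """Toy 'optimizing system': the whole system's evolution tends toward
    the target set (x near 0) despite external perturbations."""
    for t in range(steps):
        x -= 0.1 * (2 * x)  # gradient step on f(x) = x**2
        if t % 50 == 0:
            x += random.uniform(-perturb, perturb)  # knock the state around
    return x

print(abs(evolve(x=10.0)) < 0.1)  # True: still lands near the target set
```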
Selection vs Control as distinguishing different types of “space of possibilities”
Selection as having that space explicitly given & selectable numerous times by the agent
Control as having that space given only in terms of counterfactuals; the agent gets to traverse it exactly once.
These distinctions correlate with the type of algorithm being used & its internal structure: Selection uses a more search-like process over maps, while Control may just use an explicit formula … although it may very well use internal maps to Select on counterfactual outcomes!
In other words, Selection vs Control may be better viewed as relative to the chosen axis of analysis (a toy sketch follows the example below). Example:
If we decide to focus our analysis of the “space of possibilities” on, e.g., real-life outcomes, then a guided missile is always Control.
But if we decide to focus on the “space of internal representations of possibilities,” then a guided missile that searches over an internal map becomes Selection.
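Here is the toy sketch mentioned above (my own framing, not from the original post): Selection evaluates the whole candidate space before committing, while Control makes a single irreversible pass, with alternatives existing only as counterfactuals.

```python
# Selection: the space of possibilities is explicitly available, and the
# agent may evaluate as many candidates as it likes before committing.
def select(candidates, score):
    return max(candidates, key=score)

# Control: the agent traverses the one real trajectory exactly once,
# steering by an explicit formula (a proportional controller), with no
# internal search over alternatives.
def control(state, target, steps=100, gain=0.5):
    for _ in range(steps):
        state += gain * (target - state)
    return state

print(select(range(10), score=lambda c: -(c - 7) ** 2))  # 7
print(round(control(state=0.0, target=7.0), 3))          # 7.0
```

A controller that instead ran `select` over a predictive map of outcomes would count as Selection on the internal-representation reading, which is exactly the guided-missile ambiguity.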
“Internal Optimization” vs “External Optimization”
Similar to Selection vs Control, but the analysis focuses more on internal structure:
Why? Motivated by the fact that, as with the guided missile example, Control systems can be viewed as Selection systems depending on perspective …
… hence, better to focus on internal structure, where the distinction is much less ambiguous.
IO: Internal search + selection
EO: Flint’s definition of “optimizing system”
IO is included in EO, if we assume accurate map-to-environment correspondence.
To me, this doesn’t really get at what the internals of actually-control-like systems look like, which presumably form a subset of EO \ IO.
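A minimal sketch of the IO-inside-EO picture (my own toy construction; the accurate-map assumption is baked in by making the territory obey the internal model exactly):

```python
# IO: the search happens over an internal model (a "map"), not the territory.
def internal_model(state, action):
    return state + action  # the map's prediction of the next state

def io_agent(state, target, actions=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    # Internal search: evaluate every action on the map, pick the best.
    return min(actions, key=lambda a: abs(internal_model(state, a) - target))

state, target = 0.0, 3.2
for _ in range(20):
    state += io_agent(state, target)  # the territory matches the map exactly
print(round(state, 1))  # 3.0: as close to the target as the action grid allows
```

Swap `io_agent` for a fixed feedback formula and the combined system can stay EO while losing the internal search; it’s the internals of that remainder that this framing leaves unexamined.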
Search-in-Territory vs Search-in-Map
Greater emphasis on internal structure—specifically, “maps.”
Maps are a capital investment: by compressing information, they let you optimize despite not yet knowing exactly what to optimize for
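A minimal sketch of the capital-investment reading (all names and numbers here are mine): pay the survey cost once, then reuse the compressed map for objectives you didn’t know at survey time.

```python
def survey_territory(expensive_probe, locations):
    # One-time cost: compress the territory into a reusable map.
    return {loc: expensive_probe(loc) for loc in locations}

def search_in_map(world_map, objective):
    # Cheap, repeatable search over the map for any later objective.
    return max(world_map, key=lambda loc: objective(world_map[loc]))

terrain = survey_territory(lambda x: (x * 0.7) % 5.0, locations=range(100))
print(search_in_map(terrain, objective=lambda v: v))            # site with the highest value
print(search_in_map(terrain, objective=lambda v: -abs(v - 2)))  # site closest to 2
```

Search-in-Territory pays no survey cost but has to re-touch the territory for every new objective; the map pays off only as long as it stays accurate.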
I have several thoughts on these framings, but one trouble is the heavy use of words that stand for “clusters,” i.e., terms that group a bunch of correlated variables. Selection vs Control, for example, doesn’t have a clear definition/criterion but rather points at a number of correlated things: internal structure, search, maps, control-like behavior, etc.
Sure, deconfusing and pointing out clusters is useful, because clusters imply correlations, and correlations perhaps imply hidden structure + relationships. But I think the costs of cluster-representing words doing hidden inference are much greater than the benefits, and it would be better to explicitly lay out the features of the cluster one is referring to instead of just using the cluster’s name.
This is similar to the trouble I had with “wrapper-minds,” yet another example of a cluster-term pointing at a bunch of correlated variables, with people using the same term to mean different things.
Anyways, I still feel totally confused about optimization, and while these clusters/frames are useful, I think thinking in terms of them would breed even more confusion within myself. It’s probably better to take the useful individual parts within each cluster and start deconfusing from the ground up, using those as building blocks.