Probably it makes sense to emphasize that it’s the selection of the abstraction that implies a goal, not the use of the abstraction. If an abstraction shows up in an optimised thing, that’s evidence that whatever optimised it had a goal.
That’s true. But do abstractions ever show up in non-optimized things? I can’t think of a single example.
The set of things not influenced by any optimisation process is pretty small—so we’d probably have to be clearer in what counts as “non-optimized”. (I’m also not sure I’d want to say that selection processes need to have a ‘goal’ exactly.)
It strikes me that the argument you’re making might not say much about abstraction specifically—unless I’m missing something essential, it’d apply to any a-priori-unlikely configuration of information.
Both good points. “Goal” isn’t the best word for what selection processes move towards.
Besides just being unlikely configurations of information, abstractions destroy sensory information that did not previously have much bearing on actions that increased fitness (or is “selection stability” a better term?).