Why should we view minds as a subset of optimization processes, rather than viewing optimization processes as a set that contains “intelligence,” which is one particular feature of real minds? We tend to agree, for instance, that evolution is an optimization process, but the claim “evolution has a mind” would rightly be thrown out as nonsense.
EDIT: More like: real minds as we experience them, human and animal, certainly seem to contain a remarkable number of things that don’t correspond to any kind of world-optimization at all. I think there’s a great deal of confusion between “mind” and “intelligence” here.
> Why should we view minds as a subset of optimization processes, rather than viewing optimization processes as a set that contains “intelligence,” which is one particular feature of real minds?
Basically, I’m making the claim that it could be reasonable to see “optimization” as a precondition for considering something a ‘mind’ rather than a ‘not-mind,’ but not the only one; if it were the only one, minds would simply be optimization processes, not a subset of them. And here, what I really mean is something like a closed control loop: it has inputs, it processes them, it has outputs dependent on the processed inputs, and, when in a real environment, it compresses the volume of potential future outcomes into a smaller, and hopefully systematically different, volume.
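As a minimal sketch of that kind of loop, assuming a toy thermostat as the example system (the set point, the noise model, and every name here are illustrative inventions, not a definition anyone in this thread has committed to): the feedback term pulls the likely future temperatures into a narrow band around the set point, where pure drift would let them wander.

```python
import random

def control_loop(sense, process, act, steps=200):
    """Toy closed control loop: read an input, derive an internal
    state from it, and emit an output that depends on that state."""
    state = None
    for _ in range(steps):
        observation = sense()
        state = process(state, observation)
        act(state)

# Hypothetical thermostat-like example: steer a noisy temperature
# toward a set point, compressing the spread of future outcomes.
temperature = 20.0
SET_POINT = 22.0

def sense():
    return temperature

def process(state, observation):
    return observation - SET_POINT  # current error signal

def act(error):
    global temperature
    temperature += random.gauss(0.0, 0.5) - 0.5 * error  # noise plus correction

control_loop(sense, process, act)
print(f"temperature after control: {temperature:.1f}")  # settles near 22
```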
> We tend to agree, for instance, that evolution is an optimization process, but the claim “evolution has a mind” would rightly be thrown out as nonsense.
Right, but “X is a subset of Y” in no way implies “every Y is an X.”
> More like: real minds as we experience them, human and animal, certainly seem to contain a remarkable number of things that don’t correspond to any kind of world-optimization at all.
I am not confident in my ability to declare which parts of the brain serve no optimization purpose. I should also clarify that by ‘optimization’ here I mean “make things somewhat better,” for an arbitrary ‘better’ (this is the compression of future volumes remarked on earlier), rather than “choose the absolute best option.”
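A hedged sketch of that distinction, with both function names and the example data invented for illustration: the first loop only has to find something better than the status quo and can stop there, while the second must survey every option to return the global best.

```python
def improve(current, candidates, better):
    """'Make things somewhat better': accept the first candidate
    that beats the status quo, and stop looking."""
    for candidate in candidates:
        if better(candidate, current):
            return candidate
    return current

def best(candidates, score):
    """'Choose the absolute best option': a full argmax,
    which requires comparing every candidate."""
    return max(candidates, key=score)

options = [3, 7, 2, 9, 5]
print(improve(4, options, lambda a, b: a > b))  # 7: first strict improvement over 4
print(best(options, score=lambda x: x))         # 9: the global maximum
```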
> I am not confident in my ability to declare which parts of the brain serve no optimization purpose. I should also clarify that by ‘optimization’ here I mean “make things somewhat better,” for an arbitrary ‘better’ (this is the compression of future volumes remarked on earlier), rather than “choose the absolute best option.”
I think that for an arbitrary ‘better,’ rather than a subjective ‘better,’ this statement becomes tautological: you simply find the futures created by the system we’re calling a “mind” and declare them High Utility Futures purely by virtue of the fact that the system brought them about (a toy sketch of this follows below).
(And admittedly, humans have been using cui bono conspiracy reasoning, without actually considering what other people really value, for thousands of years now.)
If we want to speak non-tautologically, then I maintain my objection: very little in psychology or in subjective experience suggests that the mind as such, or as a whole, has an optimization function. Rather, it suggests that intelligence has one, as a particularly high-level adaptation that steps in when my other available adaptations prove insufficient in a given context.
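Here is the tautology sketched in miniature, with every name hypothetical and nothing taken from the original discussion: if ‘utility’ is defined after the fact from whatever outcomes a system happened to produce, then every system, a rock included, trivially counts as an optimizer.

```python
def post_hoc_utility(outcome, observed_outcomes):
    """Score an outcome 'high utility' purely because the system
    produced it. Under this definition, any system whatsoever
    maximizes its own utility function."""
    return 1.0 if outcome in observed_outcomes else 0.0

# A rock "optimizes" for sitting still: its one observed outcome
# scores 1.0 and every counterfactual alternative scores 0.0.
observed = {"rock stays put"}
print(post_hoc_utility("rock stays put", observed))     # 1.0
print(post_hoc_utility("rock rolls uphill", observed))  # 0.0
```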