a perfect Bayesian reasoner is computationally intractable, and our mental algorithms make for an excellent, possibly close to optimal, use of the limited computational resources we happen to have available.
Looking at Sandberg and Bostrom’s The Wisdom of Nature: An Evolutionary Heuristic for Human Enhancement, we see that there are several reasons why the human brain’s native algorithms are unlikely to be anything close to optimal, even given the limited computational resources we happen to have available inside our skulls:
Changed tradeoffs. Evolution “designed” the system for operation in one type of environment, but now we wish to deploy it in a very different type of environment.
Value discordance. There is a discrepancy between the standards by which evolution measured the quality of her work, and the standards that we wish to apply.
Evolutionary restrictions. Sometimes the evolutionary algorithm just can’t find certain solutions, for example because it gets stuck in local optima.
In the case of our cognitive algorithms, the “Changed tradeoffs” item seems particularly likely to be an issue. Our information-rich environment means that highly accurate information can be obtained much, much more easily than in the EEA (the environment of evolutionary adaptedness), but obtaining it requires careful rational analysis.
An important question—how changed is the environment, really? Yes, there are plenty of cases where a changed environment is obviously breaking our evolved reasoning algorithms, but I suspect many people might be overstating the difference.
At the risk of falling into a purely semantic discussion, this doesn’t mean the algorithms wouldn’t be optimal. It just makes them optimized for some other purpose than the one we’d prefer.
That’s a great discussion to have. I’d say the biggest changes are that a modern person interacts with a lot of other people and receives a lot of symbolic information. Other “major” changes, like increased availability of food or better infant healthcare, look minor to me by comparison. Not sure how to weigh this stuff, though.
We now also have computers.
I suspect the optimal evolved system in a modern environment (efficient and effective) is an idiot savant that can live long enough to spit out the source code for an AI guaranteed to increase the inclusive fitness of the genes of the host.
Genetic engineering and sperm/egg donation are other modern inventions that I don’t think we are exploiting optimally to increase our fitness.
One of the fundamental ways the environment has changed must be the sheer level of information that we are now able to process. Namely, since writing was invented, we’ve been able to consume (I would suppose) far more knowledge from far more sources. But then, since writing is essentially a mimic of the speech that we were originally “designed” for, I can’t imagine the modern environment is so very different as far as our built-in algorithms are concerned. And similarly for many other “modern” aspects of life.
Edit: Interestingly, I suppose books and written information have essentially developed in civilization as a response to the weaknesses of the evolved brain. Thus, many of the deficiencies in our cognitive operations have actually been attacked by civilization. Insofar as the brain was not properly designed, the modern environment has largely been a source of positive, external cognitive optimization/reorganization.
One might propose that the environment has actually become far less challenging in modern times; certainly I haven’t had to hunt and kill for food anytime in recent memory. Now I can live far longer, with much less (positive) stress; I can smoke and drink and damage my mind at will; I have every opportunity to become morbidly obese and mentally unhealthy; and so on. I can freely read and absorb widely disseminated propaganda from sources like Hitler, in perhaps the worst-case scenario. Perhaps the environment has been effectively weakening our internal algorithms through this kind of underuse and exploitation, rather than through any incidental non-optimization.
Good point. Civilization allows us to use the strengths of our native makeup more efficiently; thus, instead of being “maladjusted” because of the changes since the EEA, in many areas we are more at home than we could ever be naturally.
We have to do far more very-long-term planning than in the EEA; we are protected from starvation by easy job markets and stable food sources like food shops; and we have access to healthcare, both mental and physical.
Most prominently, our explicit beliefs matter more for decision theory than for signalling, whereas in the EEA the opposite was true.
As societies, perhaps. As individuals, probably not. I find it a bit odd that you mention a decreased risk of starvation alongside this item; needing to look a year or preferably several ahead, to make sure you didn’t run out of food during the winter (or the winter after that), has been a major factor in the past. Even if you lived in a warm country, it seems like there would have been more long-term dangers than there are now, when we have a variety of safety nets and a much safer society.
Existential risks excluded, I’m not sure if this is true.
Example: deciding to study at school rather than slack off.
Granted.
Did hunter-gatherers really look forward several winters ahead?
Hunter-gatherers, possibly not, but we’ve had agriculture around for 10,000 years. That has been enough time for other selection effects (for instance, the persistent domestication of cattle, and the associated dairying activities, did alter the selective environments of some human populations for sufficient generations to select for genes that today confer greater adult lactose tolerance), so I’d be cautious about putting too much weight on the hunter-gatherer environment.
Interesting. So those adaptations that could be implemented in just 10,000 / 20 = 500 generations (assuming roughly 20 years per generation) are probably more skewed towards rationality.
We can probably see the difference that those 500 generations made by the differences in life outcomes between those with aboriginal Australian DNA and white European DNA.
Why be needlessly inflammatory?
It provides a test for the theory?
Hmmm, well, I was actually considering the point purely from an academic POV—it occurred to me that the Aboriginals were a near-perfect example. But now that you point it out, I guess that comment could be construed as “in bad taste” or “racist” or something.
Cultural differences are hard to factor out, too.
The fact that human reasoning isn’t optimal in no way implies that the intelligently designed algorithm of Bayesian reasoning is better.
If you mean optimal as in “maximizing accuracy given the processing power”, then yes. But if you mean “maximizing accuracy given the data”, then Bayesian reasoning is optimal by the definition of conditional probability.
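To make the conditional-probability point concrete, here is a minimal sketch of a single Bayesian update in Python (the hypothesis, the data, and all the numbers are made up purely for illustration):

```python
# Bayes' rule is just the definition of conditional probability rearranged:
# P(H|D) = P(D|H) * P(H) / P(D).

prior_h = 0.01          # P(H): prior probability that the hypothesis is true
p_d_given_h = 0.9       # P(D|H): probability of seeing the data if H is true
p_d_given_not_h = 0.05  # P(D|~H): probability of seeing the data if H is false

# Law of total probability: P(D) = P(D|H)P(H) + P(D|~H)P(~H)
p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)

posterior_h = p_d_given_h * prior_h / p_d
print(f"P(H|D) = {posterior_h:.4f}")  # ~0.1538
```

No search or optimization is involved; given the data and the model, the posterior follows directly. The resource question is entirely about computing P(D|H) and P(D) for realistic models, which is where intractability comes in.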
Maximizing accuracy given available processing power and available data is the core problem when it comes to finding a good decision theory.
We don’t ask what decision theory God should use, but what decision theory humans should use. Both Go and chess are NP-hard and can’t be fully processed even if you had a computer built from all the atoms in the universe.
You’re confusing optimality in terms of results with efficiency in terms of computing power in your use of “NP-hard”. Something like the travelling salesman problem is NP-hard in that there’s no known way to solve it beyond a certain efficiency in terms of computing power (getting optimal results on it is easy, if you don’t care how long it takes). That doesn’t apply to chess or Go in that there is no known way to get optimal results no matter how much computing power you have. These are two completely different things.
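To make the travelling-salesman point concrete, here is a minimal brute-force sketch in Python (the distance matrix is made up; the point is only that the optimal result is easy to define and obtain, while the running time grows factorially):

```python
from itertools import permutations

# Hypothetical symmetric distances between 5 cities (made-up numbers).
dist = [
    [0, 2, 9, 10, 7],
    [2, 0, 6, 4, 3],
    [9, 6, 0, 8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5, 6, 0],
]
n = len(dist)

def tour_length(tour):
    # Total length of the closed tour (returning to the start city).
    return sum(dist[tour[i]][tour[(i + 1) % n]] for i in range(n))

# Fixing city 0 as the start, enumerate all (n-1)! orderings of the rest.
# This provably finds the optimal tour; only the running time is the problem.
best = min(((0,) + p for p in permutations(range(1, n))), key=tour_length)
print(best, tour_length(best))
```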
Surely there is a known way to play chess and Go optimally (in the sense of always either winning or forcing a draw). You just search through the entire game tree, instead of a sub-tree, using the standard minimax algorithm to choose the best move each turn. This is obviously completely computationally infeasible, but possible in principle. See the Wikipedia article “Solved game”.
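As a concrete illustration of that idea, here is a minimal exhaustive-minimax sketch in Python, with tic-tac-toe standing in for chess or Go (the algorithm is identical; only the size of the game tree differs):

```python
# Exhaustive minimax over the full game tree of tic-tac-toe.
# 'X' is the maximizing player; a board is a string of 9 cells.

def winner(board):
    lines = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]
    for a, b, c in lines:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Value of the position with `player` to move:
    +1 if X can force a win, -1 if O can, 0 for a forced draw."""
    w = winner(board)
    if w == 'X':
        return 1
    if w == 'O':
        return -1
    moves = [i for i, cell in enumerate(board) if cell == ' ']
    if not moves:
        return 0  # board full with no winner: draw
    values = [minimax(board[:m] + player + board[m+1:],
                      'O' if player == 'X' else 'X') for m in moves]
    return max(values) if player == 'X' else min(values)

# The full tree (~half a million nodes) is searchable in seconds here;
# for chess or Go the same code is correct but astronomically infeasible.
print(minimax(' ' * 9, 'X'))  # 0: perfect play is a draw
```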
Correct.
It would be extraordinary if the algorithm that is optimal given infinite computational resources were also optimal given limited resources.
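A minimal sketch of what that looks like in practice (the function names here are stand-ins supplied by the caller, not any real engine’s API): with unlimited compute you run minimax to the leaves, but under a budget the sensible algorithm changes shape, cutting off at a depth limit and substituting a cheap heuristic for the true value:

```python
def bounded_minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax. `moves`, `apply_move`, and `evaluate` are
    stand-ins for game-specific functions; `evaluate` is a cheap heuristic
    (e.g., material count in chess) used wherever the search must stop."""
    ms = moves(state)
    if depth == 0 or not ms:
        return evaluate(state)  # heuristic guess instead of the true value
    values = (bounded_minimax(apply_move(state, m), depth - 1,
                              not maximizing, moves, apply_move, evaluate)
              for m in ms)
    return max(values) if maximizing else min(values)
```

With the depth set to the full game length this reduces to exhaustive minimax; with a small depth it is a different, resource-bounded algorithm whose quality depends entirely on the heuristic.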
I suspect that by framing this as a battle between Bayesian inference and actual evolved human algorithms, we are missing the third alternative: algorithm X, which is the optimal algorithm for decision-making given the resources and options that we have in the society that we find ourselves in.