One guess, pointed out in the original comments, might be that there is reason to prefer certainty when making deals with untrustworthy agents. For instance, if I promise you a certain $24,000, and then you don’t get it, you know for sure that I lied, as does everyone else who was aware of the deal, which is pretty bad for me. If I promise you a 33⁄34 chance of $27,000 then if you don’t get it I can always claim you were just unlucky, giving me at least plausible deniability. Thus there is significant reason for you to prefer the first, since the more I have to lose by betraying you the less likely I am to do it. The same argument does not carry in the case of 33% versus 34%.
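For reference, here is the arithmetic behind the two choice pairs above (these are the standard Allais-style payoffs quoted in this thread; the code is just a quick sketch of the expected values):

```python
# Expected values of the two Allais-style choice pairs discussed above.

# Pair 1: a certain $24,000 vs. a 33/34 chance of $27,000.
ev_certain = 24_000
ev_gamble = (33 / 34) * 27_000  # roughly $26,206

# Pair 2: a 34% chance of $24,000 vs. a 33% chance of $27,000.
ev_34_of_24k = 0.34 * 24_000  # $8,160
ev_33_of_27k = 0.33 * 27_000  # $8,910

print(f"Certain $24k: {ev_certain}, 33/34 of $27k: {ev_gamble:.2f}")
print(f"34% of $24k: {ev_34_of_24k:.2f}, 33% of $27k: {ev_33_of_27k:.2f}")
```

The gamble has the higher expected value in both framings; the puzzle under discussion is why people nonetheless tend to prefer the certain option in the first pair but not the corresponding option in the second.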
I suspect that with infinite computational power on all sides this effect would vanish: failing to deliver on any deal would decrease my trustworthiness by an amount depending on the plausibility of other explanations. However, humans don’t have infinite computational power, so we save time by simply labelling people as “trustworthy” or “untrustworthy”, which creates an incentive to bias towards absolute promises rather than probabilistic ones.
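The computationally unbounded version of this update can be sketched with Bayes’ rule. The prior and the “honest failure” probabilities below are made-up illustration values, not anything from the thread; the point is only that a broken certain promise is far more damning than a broken 33/34 promise:

```python
def posterior_dishonest(prior, p_fail_if_honest, p_fail_if_dishonest=1.0):
    """Bayes' rule: probability the promiser is dishonest, given the payout failed."""
    num = prior * p_fail_if_dishonest
    den = num + (1 - prior) * p_fail_if_honest
    return num / den

PRIOR = 0.10  # assumed prior probability that the promiser is a cheat (illustrative)

# A "certain" promise almost never fails honestly (tiny force-majeure chance, assumed).
broken_certain = posterior_dishonest(PRIOR, p_fail_if_honest=0.001)

# A 33/34 promise fails honestly 1 time in 34, so there is plausible deniability.
broken_probabilistic = posterior_dishonest(PRIOR, p_fail_if_honest=1 / 34)

print(f"Broken certain promise: P(dishonest) ~ {broken_certain:.2f}")        # ~0.99
print(f"Broken 33/34 promise:   P(dishonest) ~ {broken_probabilistic:.2f}")  # ~0.79
```

With full Bayesian bookkeeping the probabilistic promise still costs reputation when broken, just less of it; the binary “trustworthy/untrustworthy” shortcut throws that graded information away.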
Of course, this is all quite complicated; it’s just one thought that springs to mind. It may be better just to favour the null hypothesis of “evolution is stupid, the human brain is a massive kludge that doesn’t normally operate on anything resembling expected utility, and massive mistakes are to be expected”.
Another guess is that using numbers to describe probability is new enough that our brains haven’t had time to evolve any way of dealing with the difference between 33% and 34%. The concept of certainty has been around for a lot longer.
There’s a negative pregnant in your statement that makes me think you believe in very recent human evolution. Is there reason to think humans have undergone any biological evolution since the development of agriculture?
Aside from lactose tolerance (or more accurately, lactase persistence, as the “wild type” is “intolerance”), there are differences in enzyme quantities in saliva due to copy number variations between those populations which have a history of consuming carbohydrates and those which do not. There are also the various resistances to malaria. For multiple reasons, including history (e.g., malaria seems to have become endemic in the Mediterranean over the course of the Roman Empire), we know these are all new, anywhere from 6,000 to 500 years before the present. I can give other examples, but these are the most clear and distinct in the literature.
Let me make sure I’m understanding correctly.
The ability of adults in certain populations to digest lactose is evidence that biological evolution of humans has occurred (since the domestication of animals?).
Different populations have different susceptibility to malaria. Am I correct that this is referring to sickle-cell trait and similar things?
If true, that seems moderately strong evidence of biological evolution of humans since the beginning of recorded history (I’m using that interchangeably with the development of agriculture). I’m interested in the evidence for very short-term evolution in humans (<500 years) if you have something that’s easy to cite.
My original point was that I’m skeptical that “social pattern” portions of our brain have undergone biological evolution since the development of agriculture. And the OP about changes in the brain allowing greater understanding of statistics seemed like that kind of assertion.
And the OP about changes in the brain allowing greater understanding of statistics seemed like that kind of assertion.
AFAICT I asserted the opposite of that. I said we haven’t had recent changes in the brain allowing for greater understanding of statistics, and that’s why we’re so bad at them.
You’re opening up a bigger debate here. I recall that Razib Khan often posts on this subject (there’s plenty of evidence, but lots of distinctions to be made) on Gene Expression.
Four reasons: variation, selection, retention, and competition. If you mean biological evolution with definite and noticeable effects in the general population, lactose tolerance is an obvious example.
The vertebrate retina is a kludge, but no fraction of the population has octopus-style retinas, so there’s no selectable variance favoring the genes that produce octopus-type retinas. Similarly, we can’t evolve a proper set of long back bones, because there’s no variance in the human population to select against our ludicrous stacked-vertebrae arrangement.
But the degree to which people favor certainty does vary, and so it is vulnerable to selection pressure. There must accordingly be a reason why certainty bias has persisted.
Perhaps all variation in certainty favouring is simply due to environmental factors. Remember that all complex adaptations must be universal, so for any of the variance to be genetic there must be a simple difference, something like a single gene being present or absent, which controls how much someone desires certainty.
Even if some is genetic, I would guess that the primary difference is in which side of the System 1 vs. System 2 dichotomy is more likely to win. This affects lots of things other than certainty bias, and so may have been kept where it is by many other factors, with certainty bias being an unfortunate side effect of the general way in which System 1 works (in particular, System 1 seems bad at expressing nuances and continuous ranges; it sees the world almost entirely in good-vs-bad dichotomies).
Certainly there are no true expected utility maximisers out there, so it is no surprise that we should violate expected utility maximisation in some way.
Even having said that, if you demand an explanation the one I just gave still seems reasonably good.
Remember that all complex adaptations must be universal, so for any of the variance to be genetic there must be a simple difference, something like a single gene being present or absent, which controls how much someone desires certainty.
This doesn’t appear to be the case for genetic variation in intelligence. (Also, I don’t see how it follows in the first place.)
Any complex adaptation, requiring many genes to work together, cannot evolve all at once; that would be too unlikely a mutation. Instead, pieces evolve one by one, each individually useful in the context in which it first appears. However, there is not enough selection pressure to evolve a new piece unless the old pieces are already universal, so you would not expect anything complicated to exist in some but not all members of a species.
With intelligence, it seems like many different factors can affect it on the margins, because the brain is a complex organ that can be slowed down, sped up, or damaged in many ways. However, I do not notice a particularly wide intelligence spread among humans: only in rare cases where something is genuinely broken do we find someone less intelligent than a chimpanzee, and we literally never find someone more intelligent by an equivalent amount.
Any complex adaptation, requiring many genes to work together, cannot evolve all at once; that would be too unlikely a mutation. Instead, pieces evolve one by one, each individually useful in the context in which it first appears. However, there is not enough selection pressure to evolve a new piece unless the old pieces are already universal, so you would not expect anything complicated to exist in some but not all members of a species.
I get that. I don’t see how that could imply that quantitative variation must be controlled by a single gene.
I also don’t see how the magnitude of variation in intelligence affects the argument (“particularly wide intelligence spread” is subjective).
It doesn’t quite have to be controlled by a single gene; I was giving an example. Something like height, which is affected by many factors, could be affected by lots of single-gene substitutions, but you would expect the overall effect to look like an averaging out, not like some humans having one set of decision-making machinery and others having a totally different set.
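The “averaging out” intuition is just the central limit theorem: a trait built from many small, independent gene effects ends up roughly bell-shaped around the mean, rather than splitting the population into discrete types. A toy simulation (the gene count and effect sizes are made up for illustration):

```python
import random
import statistics

random.seed(0)

N_GENES = 100     # hypothetical number of small-effect loci
N_PEOPLE = 10_000

# Each "gene" independently contributes +1 or 0 to the trait.
population = [sum(random.randint(0, 1) for _ in range(N_GENES))
              for _ in range(N_PEOPLE)]

mean = statistics.mean(population)    # near 50
stdev = statistics.stdev(population)  # near 5, i.e. sqrt(100 * 0.25)

# Nearly everyone falls within a few standard deviations of the mean;
# there is no second cluster with a "totally different" trait value.
print(f"mean ~ {mean:.1f}, stdev ~ {stdev:.1f}")
```

Many independent substitutions thus produce a continuous spread around a common design, which is why you see variation in degree, not two different sets of machinery.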
Perhaps all variation in certainty favouring is simply due to environmental factors.
Could very well be.
Even having said that, if you demand an explanation the one I just gave still seems reasonably good.
Yes, it does. I prefer the one paulfchristiano made, since it applies to a wider range of circumstances (interpersonal and environmental), but the untrustworthy agent explanation works well enough.