According to the WP article’s section on epidemiology, possibly more than half of all people have a very weak form of myopia (0.5 to 1 diopters). The overall prevalence (as much as a third of the population for significant myopia) is much larger than could be explained solely by the proposed correlations (genetic or environmental).
To me, this high prevalence and smooth distribution (in degree of myopia) suggest that it should just be treated as a weakness or a disease; we shouldn’t act surprised that such conditions exist. This doesn’t even mean that myopia isn’t selected against, as CronoDAS suggested (that would only be true within the last 50-100 years), just that the selection isn’t strong enough and hasn’t been going on long enough to eliminate it. (With 30-50% prevalence, that would take quite strong selection effects.)
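As a rough illustration of how strong that selection would need to be, here is a minimal sketch; the single-locus model and the fitness penalties s are illustrative assumptions, not figures from the article or this thread:

    # Toy haploid model (an illustrative assumption, not from the article):
    # a myopia-prone variant at frequency p pays a fitness penalty s each generation.
    # Replicator update: p' = p*(1-s) / (p*(1-s) + (1-p)).

    def generations_to_rarity(p0, p_target, s):
        """Generations of selection needed to push the variant from p0 down to p_target."""
        p, gens = p0, 0
        while p > p_target:
            p = p * (1 - s) / (p * (1 - s) + (1 - p))
            gens += 1
        return gens

    # From ~40% prevalence down to 1%: a mild 1% fitness penalty takes
    # roughly 400+ generations (about 10,000 years at ~25 years per generation);
    # only a harsh penalty eliminates the variant quickly.
    for s in (0.01, 0.05, 0.20):
        print(f"s = {s}: {generations_to_rarity(0.40, 0.01, s)} generations")

Even granting the toy model, weak selection against a variant that common leaves it common for a very long time.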
Why are you surprised that such defects exist? The average human body has plenty of defects. Compare: “many humans are physically incapable of the exertions required by the life of a professional Roman-era soldier, and couldn’t be trained for it no matter how much they tried.”
Maybe we should be surprised that so few defects exist, or maybe we shouldn’t be surprised at all—how can you tell?
The prevalence of myopia has increased dramatically since 1970.
The two factors this suggests to me, over that time period, are “increase in TV watching among young children” and “change in diet toward highly processed foods high in carbohydrates”. This hypothesis would also predict the finding that myopia increased faster among blacks than among whites, since these two factors have been stronger in poorer urban areas than in wealthier or more rural ones.
Hypotheses aside, good find!
Has this happened since 1970?
(The article suggests “computers and handheld devices.”)
It didn’t begin then, but it certainly continued to shift in that direction. IIRC from The Omnivore’s Dilemma, it was under Nixon that massive corn subsidies began and vast corn surpluses became the norm, which led to a frenzy of new, cheap high-fructose-corn-syrup-based products as well as the use of corn for cow feed (which, since cows can’t digest corn effectively, led to a whole array of antibiotics and additives as the cheap solution).
Upshot: I’d expect that the diet changes in the 1970s through 1990s were quite substantial, that e.g. sodas became even cheaper and more ubiquitous, etc.
The surprise is that an incredibly heavily selection-optimized trait fails to work at all in a surprising fraction of people (including myself). So many bits of optimization pressure exerted, only to choke on the last few.
Well then it’s not all that highly selection-optimized. The reality is that many people do have poor eyesight and they do survive and reproduce. Why do you expect stronger selection than is in fact the case?
Look, for thousands of generations, natural selection applied its limited quantity of optimization pressure toward refining the eye. But now it’s at a point where natural selection only needs a few more bits of optimization to effect a huge vision improvement by turning a great-but-broken eye into a great eye.
The fact that most people have fantastic vision shows that this trait is high utility for natural selection to optimize. So it’s astounding that natural selection doesn’t think it’s worth selecting for working fantastic eyes over broken fantastic eyes, when that selection only takes a few bits to make. Natural selection has already proved its willingness to spend way more bits on way less profound vision improvements, get it?
As Eliezer pointed out, the modern prevalence of bad vision is probably due to developmental factors specific to the modern world.
Just because you can imagine a better eye doesn’t mean that evolution will select for it. Evolution only selects for things that help the organisms it’s acting on produce children and grandchildren, and it seems at least plausible to me that perfect eyesight isn’t in that category in humans. Even before we invented glasses, living in groups would have allowed us to assign the individuals with the best eyesight to the tasks that required it, leaving those with a tendency toward nearsightedness to do less demanding tasks and still contribute to the tribe and win mates. In fact, in such a scenario it may even be plausible for nearsightedness to be selected for: it seems to me that someone assigned to fishing or planting would be less likely to be eaten by a tiger than someone assigned to hunting.
First of all, I’m not “imagining a better eye”; by “fantastic eye” I mean the eye that natural selection spent 10,000 bits of optimization to create. Natural selection spent 10,000 bits for 10 units of eye goodness, then left 1/3 of us with a 5-bit optimization shortage that reduces our eye goodness by 3 units.
So I’m saying: if natural selection thought a unit of eye goodness was worth 1,000 bits, up to 10 units, why in modern humans doesn’t it purchase 3 whole units for only 5 bits, the same 3 units it previously purchased for 3,000 bits?
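Spelling the toy arithmetic out (these are the illustrative numbers above, not measured quantities):

\[
\frac{10{,}000\ \text{bits}}{10\ \text{units}} = 1{,}000\ \text{bits/unit}
\quad\Rightarrow\quad
3\ \text{units} \approx 3{,}000\ \text{bits},
\qquad\text{versus}\qquad
\frac{5\ \text{bits}}{3\ \text{units}} \approx 1.7\ \text{bits/unit}.
\]

On its own terms, the puzzle is why selection won’t pay a per-unit price roughly 600 times cheaper than the one it has already paid.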
I am aware of your general point that natural selection doesn’t always evolve things toward cool engineering accomplishments, but your just-so story about potential advantages of nearsightedness doesn’t reduce my surprise.
Your strength as a rationalist is to be more confused by fiction than by reality. Making up a story to explain the facts in retrospect is not a reliable algorithm for guessing the causal structure of eye-goodness and its consequences. So don’t treat the data as less surprising just because you can tell a story that would explain it; the story isn’t evidence, so stay confused.
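In Bayesian terms (just the standard odds form of Bayes’ theorem, spelled out to make the point concrete):

\[
\frac{P(H \mid D)}{P(\neg H \mid D)} \;=\; \frac{P(D \mid H)}{P(D \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}.
\]

A just-so story invented after seeing the data, which could have been invented just as easily for the opposite data, carries a likelihood ratio of roughly 1 and so shouldn’t move the posterior odds at all.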
Perhaps, in the current environment, those 3 units aren’t worth 5 bits, even though at one point they were worth 3,000 bits. (Evolution thoroughly ignores sunk costs.)
This suggestion doesn’t preclude other hypotheses; in fact, I’m not even intending to suggest that it’s a particularly likely scenario, hence my use of the word plausible rather than anything more enthusiastic. But it is a plausible one, and earlier you appeared to be vigorously denying that it was even possible. Disregarding hypotheses for no good reason isn’t particularly good rationality, either.
A priori, I wouldn’t have expected such a high-resolution retina to evolve in the first place if the lens in front of it didn’t allow taking full advantage of it anyway. So I would have expected the resolving power of the lens to roughly match the resolution of the retina. (Well, oversampling can prevent moiré effects, but how likely was that to be an issue in the EEA?)
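For what it’s worth, the “matching” intuition here is just the standard sampling argument (a general signal-processing fact, not something from the article): aliasing and moiré are avoided when

\[
f_s \;\ge\; 2 f_c,
\]

where \(f_c\) is the highest spatial frequency the lens actually passes and \(f_s\) is the retinal sampling rate set by photoreceptor spacing. A retina much finer than \(2 f_c\) is oversampling, and a lens much blurrier than \(f_s/2\) wastes retinal resolution.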