It’s an interesting experiment, and probably a good exercise for teaching people about falsificationism under controlled conditions, but real theories are too complex, and theories about human behavior are far too complex.
Take the “slam dunk” theory of evolution. If “Some people and animals are homosexual” were in there, I’d pick that as the lie without even looking at the other two (well, if I didn’t already know). There are some decent explanations of how homosexuality might fit into evolution, but they’re not the sort of thing most people would start thinking about unless they already knew homosexuality existed.
(Another example: plate tectonics and “Hawaii, right smack in the middle of a huge plate, is full of volcanoes”.)
A rationalist ends up being wrong sometimes, and can only hope for well-calibrated probabilities. I think that, in the absence of observation, this is the sort of prediction that most human-level intelligences would end up getting wrong, and I wouldn’t necessarily assume they were making any errors of rationality in doing so, but rather hitting the 1 out of 20 occasions when a 5% probability occurs.
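The arithmetic behind “1 out of 20” can be sketched quickly (the 5% figure is the comment’s illustrative number, not a measured rate):

```python
# Illustrative numbers from the comment above, not from any study:
# a perfectly calibrated reasoner makes 20 independent predictions,
# each at 95% confidence (so each is wrong with probability 0.05).
p_wrong = 0.05
n = 20

# Expected number of misses, and the chance of missing at least once:
expected_misses = n * p_wrong
p_at_least_one_miss = 1 - (1 - p_wrong) ** n

print(expected_misses)                # 1.0: about one miss in 20, with no error of rationality
print(round(p_at_least_one_miss, 3))  # 0.642: more likely than not to miss at least once
```

So even a perfectly calibrated reasoner should expect to flub roughly one such prediction per twenty, which is the point being made.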
“it doesn’t prove their idea is totally wrong, only that reliance upon it would be.”

As that quoted bit shows, I agree completely. But while evolution is correct, you can’t use it to go around making broad factual inferences. While you should believe in evolution, you shouldn’t go around making statements like “There are no homosexuals” or “Every behaviour is adaptive in a fairly obvious way” just because your theory predicts them. This exercise properly demonstrates that while the theory is true in a general sense, broad inferences based on a simplistic model of it are not appropriate.
But evolution really does make homosexuality less likely to occur. Given a set of biological statements like “some animals are homosexual” together with the theory of evolution, you will get many more true/false labelings correct than you would without the theory. Sure, you’ll get that one wrong, but you’ll still get a lot more right than you otherwise would. (In fact, I once read part of a book about evolution, by a professor who teaches the subject, whose title I can’t remember although I just spent a while trying to look it up; its thesis was that, armed only with the theory of evolution, you can correctly answer a large number of biological questions without knowing anything about the species involved.)
With complex theories and complex truths, you get statistical predictive value, rather than perfection. That doesn’t mean that testing your theories on real data (the basic idea behind this post) is a bad thing! It just means you need a larger data set.
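A toy binomial model shows why the larger data set matters (the per-statement accuracies below are assumed purely for illustration): a theory with merely statistical validity is hard to distinguish from guessing on three statements, but easy on a hundred.

```python
from math import comb

def p_at_least(n, k, p):
    """P(at least k of n labels correct) when each label is right with probability p."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Assumed per-statement accuracies: 0.9 with the theory, 0.5 guessing.
# On 3 statements, even a perfect score only weakly separates the two...
print(round(p_at_least(3, 3, 0.9), 3), round(p_at_least(3, 3, 0.5), 3))  # 0.729 0.125
# ...but on 100 statements, scoring 80+ essentially never happens by guessing.
print(p_at_least(100, 80, 0.9) > 0.99)   # True
print(p_at_least(100, 80, 0.5) < 1e-6)   # True
```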
Also: “the human eye sees objects in incredible detail, but a third of people’s eyes can’t effectively see stuff more than a few feet away”. Wtf.
Anyone got any insight about eyes or homosexuality?
AFAIK, myopia seems to be caused, at least in part, by spending a lot of time focusing on close objects (such as books, computer screens, blackboards, walls, etc.); it’s the result of another mismatch between the environment we live in and our genes. (Although it’s fairly easily corrected, so there’s not really any selection pressure against it these days.)
According to the studies referenced by the Wikipedia article, this is disputed and even if true would be, at most, a contributing factor active only in some of the cases. Even with no “near-work” many people would be myopic.
According to the WP article’s section on epidemiology, possibly more than half of all people have a very weak form of myopia (0.5 to 1 diopters). The overall prevalence (as much as a third of the population for significant myopia) is much higher than the proposed correlates, genetic or environmental, could explain on their own.
To me this high prevalence, and the smooth distribution in degree of myopia, suggest that it should just be treated as a weakness or a disease. We shouldn’t act surprised that such things exist. It doesn’t even mean that it isn’t selected against, as CronoDAS suggested (that would only be true within the last 50-100 years); just that the selection isn’t strong enough, and hasn’t been going on long enough, to eliminate myopia. (With 30-50% prevalence, eliminating it would take quite strong selection effects.)
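How long “not long enough” can be is easy to sketch with a one-locus haploid toy model (the selection coefficients below are made up for illustration, not estimates for myopia):

```python
# Toy haploid selection model: carriers of the trait have relative
# fitness (1 - s); non-carriers have fitness 1. Numbers are illustrative.
def generations_to_reach(freq, target, s):
    """Generations until the trait frequency falls from `freq` to `target`."""
    gens = 0
    while freq > target:
        freq = freq * (1 - s) / (freq * (1 - s) + (1 - freq))
        gens += 1
    return gens

# Pushing a trait from 40% prevalence down below 1%:
slow = generations_to_reach(0.40, 0.01, 0.01)  # weak selection (s = 1%)
fast = generations_to_reach(0.40, 0.01, 0.10)  # strong selection (s = 10%)
print(slow, fast)  # hundreds of generations vs. a few dozen
```

Under weak selection, hundreds of generations (thousands of years, for humans) still leave the trait common, which is consistent with the point above.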
Why are you surprised that such defects exist? The average human body has lots of various defects. Compare: “many humans are physically incapable of the exertions required by the life of a professional Roman-era soldier, and couldn’t be trained for it no matter how much they tried.”
Maybe we should be surprised that so few defects exist, or maybe we shouldn’t be surprised at all—how can you tell?
The prevalence of myopia has increased dramatically since 1970.
The two factors this suggests to me, over that time period, are “increase in TV watching among young children” and “change in diet toward highly processed foods high in carbohydrates”. This hypothesis would also predict the finding that myopia increased faster among blacks than among whites, since these two factors have been stronger in poorer urban areas than in wealthier or more rural ones.
Hypotheses aside, good find!
Has this happened since 1970?
(The article suggests “computers and handheld devices.”)
It didn’t begin then, but it certainly continued to shift in that direction. IIRC from The Omnivore’s Dilemma, it was under Nixon that massive corn subsidies began and vast corn surpluses became the norm, which led to a frenzy of new, cheap high-fructose-corn-syrup-based products as well as the use of corn for cow feed (which, since cows can’t digest corn effectively, led to a whole array of antibiotics and additives as the cheap solution).
Upshot: I’d expect that the diet changes in the 1970s through 1990s were quite substantial, that e.g. sodas became even cheaper and more ubiquitous, etc.
The surprise is that a trait under incredibly heavy selection optimization nonetheless fails to work at all in a surprising fraction of people (including myself). So many bits of optimization pressure exerted, only to choke on the last few.
Well then it’s not all that highly selection-optimized. The reality is that many people do have poor eyesight and they do survive and reproduce. Why do you expect stronger selection than is in fact the case?
Look, for thousands of generations, natural selection applied its limited quantity of optimization pressure toward refining the eye. But now it’s at a point where natural selection only needs a few more bits of optimization to effect a huge vision improvement by turning a great-but-broken eye into a great eye.
The fact that most people have fantastic vision shows that this trait is high utility for natural selection to optimize. So it’s astounding that natural selection doesn’t think it’s worth selecting for working fantastic eyes over broken fantastic eyes, when that selection only takes a few bits to make. Natural selection has already proved its willingness to spend way more bits on way less profound vision improvements, get it?
As Eliezer pointed out, the modern prevalence of bad vision is probably due to developmental factors specific to the modern world.
Just because you can imagine a better eye, doesn’t mean that evolution will select for it. Evolution only selects for things that help the organisms it’s acting on produce children and grandchildren, and it seems at least plausible to me that perfect eyesight isn’t in that category, in humans. Even before we invented glasses, living in groups would have allowed us to assign the individuals with the best eyesight to do the tasks that required it, leaving those with a tendency toward nearsightedness to do less demanding tasks and still contribute to the tribe and win mates. In fact, in such a scenario it may even be plausible for nearsightedness to be selected for: It seems to me that someone assigned to fishing or planting would be less likely to be eaten by a tiger than someone assigned to hunting.
First of all, I’m not “imagining a better eye”; by “fantastic eye” I mean the eye that natural selection spent 10,000 bits of optimization to create. Natural selection spent 10,000 bits for 10 units of eye goodness, then left 1⁄3 of us with a 5-bit optimization shortage that reduces our eye goodness by 3 units.
So I’m saying: if natural selection thought a unit of eye goodness was worth 1,000 bits, up to 10 units, why in modern humans doesn’t it purchase 3 whole units for only 5 bits, the same 3 units it previously purchased for 3,333 bits?
I am aware of your general point that natural selection doesn’t always evolve things toward cool engineering accomplishments, but your just-so story about potential advantages of nearsightedness doesn’t reduce my surprise.
Your strength as a rationalist is to be more confused by fiction than by reality. Making up a story to explain the facts in retrospect is not a reliable algorithm for guessing the causal structure of eye-goodness and its consequences. So don’t increase the posterior probability of observing the data as if your story is evidence for it—stay confused.
Perhaps, in the current environment, those 3 units aren’t worth 5 bits, even though at one point they were worth 3,333 bits. (Evolution thoroughly ignores the sunk cost fallacy.)
This suggestion doesn’t preclude other hypotheses; in fact, I’m not even intending to suggest that it’s a particularly likely scenario—hence my use of the word plausible rather than anything more enthusiastic. But it is a plausible one, which you appeared to be vigorously denying was even possible earlier. Disregarding hypotheses for no good reason isn’t particularly good rationality, either.
A priori, I wouldn’t have expected such a high-resolution retina to evolve in the first place, if the lens in front of it wouldn’t have allowed one to take full advantage of it anyway. So I would have expected the resolving power of the lens to roughly match the resolution of the retina. (Well, oversampling can prevent moiré effects, but how likely was that to be an issue in the EEA?)
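The matching intuition can be checked on rough textbook figures (both values below are standard approximations, assumed here for illustration): foveal cone spacing of about 2.5 µm behind a roughly 17 mm reduced-eye focal length gives a sampling limit near the often-cited ~60 cycles/degree optical cutoff of the well-focused eye.

```python
import math

# Back-of-envelope check using approximate textbook values (assumptions):
cone_spacing_m = 2.5e-6       # foveal cone spacing, ~2.5 micrometers
eye_focal_length_m = 17e-3    # reduced-eye focal length, ~17 mm

# Angular size of one cone spacing:
spacing_deg = math.degrees(cone_spacing_m / eye_focal_length_m)
spacing_arcmin = spacing_deg * 60
print(round(spacing_arcmin, 2))   # ~0.51 arcmin per cone

# Nyquist: resolving a grating needs ~2 samples per cycle, so the cone
# mosaic supports about 1 / (2 * spacing) cycles per degree:
cycles_per_degree = 1 / (2 * spacing_deg)
print(round(cycles_per_degree))   # ~59, close to the eye's optical cutoff
```

On these numbers the retina’s sampling density and the optics of an emmetropic eye are indeed roughly matched, which makes the myopic mismatch the odd case out.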
That may be diet, not evolutionary equilibrium.