The main argument appears to be that on average, higher intelligence implies a higher rate of mental disorders such as autism and Asperger’s syndrome. I don’t see how this relates to humans “making themselves smarter”—presumably, if we have the technology to improve our brains, we’ll of course also be able to get rid of the nasty side effects introduced by the alien god that has been improving them for us thus far.
There’s also the claim that things which improve one aspect of mental performance are likely to harm another. This isn’t massively surprising to me: you’d expect that if upping a single hormone level or whatever simply improved performance overall, evolution would have ‘found’ it already. But presumably the same is true of giving performance-enhancing drugs to less intelligent animals—or, for that matter, of giving people steroids etc. to increase their physical performance.
But just because drugs to make you run faster might lower your life expectancy, that doesn’t mean our current running speed is the best evolution or technology can achieve. The problem is that any complex adaptation, like intelligence, is going to sit at a ‘sweet spot’, in the sense that a random massive change in any single factor will make it less successful. That doesn’t mean that evolution, or potentially much more sophisticated technological enhancement, can’t improve matters.
Also, the ‘something’s going to get worse’ principle only holds if what we consider bad is the same as what evolution selects against. It could in principle be true that humans would become much more intelligent if they lost something that made them capable of defending themselves, reproducing, making allies or whatever. If our aims differ from what benefits our genes’ survival, we may well be able to improve on nature, as we do with artificial sweeteners, sex with condoms and other cunning tricks.
There’s also the issue of “local maxima”: it’s possible (probable, even) that there are evolutionary routes to smarter humans, but the intermediate steps perform poorly, so selection resists moving along them (see the sketch below).
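To make the local-maximum point concrete, here is a minimal hill-climbing sketch (my own toy illustration, not anything from the thread): on a two-peaked fitness landscape, a search that only accepts strictly improving steps stalls on the nearer, lower peak even though a higher one exists—roughly the bind a hill-climbing process is in when the intermediate steps are worse.

```python
# Toy fitness landscape with two peaks: a low one near x=1, a high one near x=4.
def fitness(x: float) -> float:
    return max(0.0, 1.0 - (x - 1.0) ** 2) + 2.0 * max(0.0, 1.0 - (x - 4.0) ** 2)

# Hill climbing that only accepts strict improvements, loosely analogous to
# selection refusing to pass through less-fit intermediate steps.
def hill_climb(x: float, step: float = 0.1, iters: int = 1000) -> float:
    for _ in range(iters):
        best = max((x - step, x, x + step), key=fitness)
        if best == x:  # no single step improves fitness: stuck on a local peak
            return x
        x = best
    return x

start = 0.5  # starts in the basin of the lower peak
end = hill_climb(start)
print(f"stuck near x={end:.2f} with fitness {fitness(end):.2f}")  # ~x=1, fitness ~1
print(f"global peak at x=4.00 with fitness {fitness(4.0):.2f}")   # fitness 2
```

This is only an analogy, of course—real evolutionary search differs in many ways—but it shows why “each intermediate step must pay off” is a genuine constraint, and why “evolution hasn’t done it” doesn’t imply “it can’t be done”.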
Diversity of a population plays a role too. If I’m well below Feynman level (and I am), then there’s a possibility that I can slightly improve my cognitive abilities without any negative consequences.
My experience with nootropics (racetams) seems to support this, as far as anecdotal evidence can support anything.
Of course, that assumes that autism should be considered a mental disorder. Many of those on the autism spectrum don’t consider it one, whereas most of those with depression or high levels of anxiety do consider their condition a disorder. It looks a lot like status quo bias to me: the implicit reasoning is that if being more intelligent would make our minds qualitatively different, then we shouldn’t try to become more intelligent.
As a high-functioning autist, I would love for there to be a higher representation of fellow HFAs in the population. Our learning functions would still be different from those of the baseline population (as it currently exists), but… I feel the world would be a better place if HFAs represented as much as 10% of the population. Beyond that, I have uncertainty.
It seems to me that some of the “high-functioning” / “low-functioning” autism distinction actually has to do with the comorbidity of various other disorders and disabilities, as well as with the quality of schooling and other care. There seem to be a number of autistic folks whose lives are complicated by PTSD from bad psychiatric care, institutionalization, abusive schooling situations, etc. Presumably, if ASD were more common and better understood, these outcomes would be less likely.
Then again, defining disorders by self-reporting isn’t that much more accurate than going with “any mental condition considered weird by society”.
But that is how a lot of mental disorders are defined. See: attempts to medicalise non-heterosexuality.