The fact that the educated and intelligent are sometimes in the wrong doesn’t mean it isn’t a good heuristic. Pretty much any heuristic is going to fail sometimes. The question is whether the heuristic is accurate (in the sense of being more often correct than not) and, if so, how accurate it is. This heuristic seems to be one where the general trend is clear. I can’t identify a single example other than Marxism in the last hundred years where the intellectual establishment has been very wrong, and even then, that’s an example where the general public in many areas also had a fair bit of support for that view.
I’m curious about your claim that “intellectuals care much more about the status-signaling aspects of their opinions than the common folk.” This seems plausible to me, but I’d be curious what substantial evidence there is for the claim.
I’m reading The Rational Optimist at the moment, which has a few examples.

Malthusian ideas about impending starvation or resource exhaustion due to population growth have been popular with intellectuals for a long time, but particularly so in the last 100 years. Paul Ehrlich is a well-known example. He famously lost his bet with economist Julian Simon on resource scarcity, and his prediction in The Population Bomb in 1968 that India would never feed itself was already being proved wrong as the Green Revolution took hold. These ideas were widely held in intellectual circles (and still are), but there is a long track record of specific predictions derived from them that have proved wrong.
Another case that springs to mind: it looks increasingly likely that the mainstream advice on diet as embodied in things like the USDA food guide pyramid was deeply flawed. The dominant theory in the intellectual establishment regarding the relationship between fat, cholesterol and heart disease also looks pretty shaky in light of new research and evidence.
I’d also argue that the intellectual establishment over the latter half of the twentieth century has overemphasized the blank-slate / nurture side of the nature vs. nurture debate and neglected the evidence for a genetic basis to many human differences.
Population/natural resource exhaustion related crises are a bit iffy, because it is plainly obvious that if populations keep growing exponentially forever, relative to linearly growing or constant resources (like room to live on), one or the other has got to give.
Mispredicting when it will happen is different from knowing that it has to happen eventually, and how could it not?
Even expanding into space won’t solve the problem, since the number of planets we can reach grows at most with the cube of time (we can expand no faster than some fixed speed), and exponential growth in population and resource demands eventually outruns any such polynomial.
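Here is a back-of-the-envelope sketch of that point in Python. All the numbers are assumptions chosen for illustration (a 1% annual growth rate, a ten-billion starting population, a rough local star density), and expansion at light speed is granted; the reachable volume still only grows cubically, so the gap between exponential demand and reachable star systems keeps widening.

```python
import math

growth_rate = 0.01          # assumed: 1% annual growth in population/resource demand
initial_population = 1e10   # assumed starting population
star_density = 0.004        # assumed star systems per cubic light-year (rough local figure)

def log10_population(years):
    # log10 of N0 * exp(r * t), kept in log space to avoid overflow for large t
    return math.log10(initial_population) + growth_rate * years / math.log(10)

def log10_reachable_systems(years):
    # star systems inside a sphere that has expanded at light speed for `years` years
    volume_ly3 = (4.0 / 3.0) * math.pi * years ** 3
    return math.log10(volume_ly3 * star_density)

for years in (1_000, 10_000, 100_000, 1_000_000):
    gap = log10_population(years) - log10_reachable_systems(years)
    print(f"after {years:>9,} years: people per reachable star system ~ 10^{gap:.0f}")
```

The particular figures don’t matter; any positive exponential rate produces the same ever-widening gap.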
There are definitely plenty of other scientifically held views that get overturned here and there, though—another example is fever, which for centuries has been considered a negative side effect of an infection, but lately it’s been found to have beneficial properties, as certain elements of your immune system function better when the temperature rises (and certain viruses function worse). http://www.newscientist.com/article/mg20727711.400-fever-friend-or-foe.html
Population/natural resource exhaustion related crises are a bit iffy, because it is plainly obvious that if populations keep growing exponentially forever, relative to linearly growing or constant resources (like room to live on), one or the other has got to give.
Obviously the people disputing the wrong predictions know this. Julian Simon was just as familiar with this trivial mathematical fact as Paul Ehrlich. The fact that this knowledge led Paul Ehrlich to make bad predictions indicates that his analysis was missing something that Julian Simon was considering. Often this missing something is a basic understanding of economics.
I can’t identify a single example other than Marxism in the last hundred years where the intellectual establishment has been very wrong, and even then, that’s an example where the general public in many areas also had a fair bit of support for that view.
Well, on any issue, there will be both intellectuals and non-intellectuals on all sides in some numbers. We can only observe how particular opinions correlate with various measures of intellectual status, and how prevalent they are among people who are in the upper strata by these measures. Marxism is a good example of an unsound belief (or rather a whole complex of beliefs) that was popular among intellectuals; it’s a good example precisely because its basic unsoundness is no longer seriously disputable. Other significant examples from the last hundred years are unfortunately a subject of at least some ongoing controversy; most of that period is still within living memory, after all.
Still, some examples that, in my view, should not be controversial given the present state of knowledge are various highbrow economic theories that managed to lead their intellectual fans into fallacies even deeper than those of naive folk economics, the views of human nature and behavior of the sort criticized in Steven Pinker’s The Blank Slate, and a number of foreign policy questions in which the subsequent historical developments falsified the fashionable intellectual opinion so spectacularly that the contemporary troglodyte positions ended up looking good in comparison. There are other examples I have in mind, but those are probably too close to the modern hot-button issues to be worth bringing up.
The question is whether the heuristic is accurate (in the sense of being more often correct than not) and, if so, how accurate it is. This heuristic seems to be one where the general trend is clear.
Frankly, in matters of politics and ideology, I don’t find the trend so clear. To establish the existence of such a trend, we would have to define a clear metric for the goodness of outcomes of various policies, and then discuss and evaluate various hypothetical and counterfactual scenarios of policies that have historically found, or presently find, higher or lower favor among the (suitably defined) intellectual class.
This, however, doesn’t seem feasible in practice. Neither is it possible to evaluate the overall goodness of policy outcomes in an objective or universally agreed way (except perhaps in very extreme cases), nor is it possible to construct accurate hypotheticals in matters of such immense complexity where the law of unintended consequences lurks behind every corner.
I’m curious about your claim that “intellectuals care much more about the status-signaling aspects of their opinions than the common folk.” This seems plausible to me, but I’d be curious what substantial evidence there is for the claim.
My answer is similar to the earlier comment by Perplexed: given the definition of “intellectual” I assume, the claim is self-evident, in fact almost tautological.
I define “intellectuals” as people who derive a non-negligible part of their social status—either as public personalities or within their social networks—from the fact that other people show some esteem and interest for their opinions about issues that are outside the domain of mathematical, technical, or hard-scientific knowledge, and that are a matter of some public disagreement and controversy. This definition corresponds very closely to the normal usage of the term, and it implies directly that intellectuals will have unusually high stakes in the status-signaling implications of their beliefs.
Opposition to nuclear power?

OK, but apart from Marxism, nuclear power, coercive eugenics, Christianity, psychoanalysis, and the respective importance of nature and nurture—when has the intellectual establishment ever been an unreliable guide to finding truth?
Come to think of it, one thing I’m surprised nobody mentioned is the present neglect of technology-related existential risks.

Yeah, that provides some more examples. The elite was very worried about existential risks from nuclear war (“The Fate of the Earth”), resource shortages and mass starvation (“Club of Rome”), and technology-based totalitarianism (“1984”). Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
I don’t think worrying about nuclear war during the Cold War constituted either “crying wolf” or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after “The Fate of the Earth” was published), and various false alert incidents could have resulted in nuclear war, and I’m not sure why anyone who opposed nuclear weapons at the time would be “embarrassed” in the light of what we now know.
I don’t think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concerns about some technology risks like EMP attacks and nuclear terrorism are still taken seriously, even though these are probably unlikely to happen and the damage would be much less severe than a nuclear war.
I don’t think an existential risk has to be a certainty for it to be worth taking seriously.
I agree. And nuclear war was certainly a risk that was worth taking seriously at the time.
However, that doesn’t make my last sentence any less true, especially if you replace “embarrassed” with “exhausted”. The risk of a nuclear war, somewhere, some time within the next 100 years, is still high—more likely than not, I would guess. It probably won’t destroy the human race, or even modern technology, but it could easily cost 400 million human lives. Yet, in part because people have become tired of worrying about such things, having already worried for decades, no one seems to be doing much about this danger.
When you say that no one seems to be doing much, are you sure that’s not just because the efforts don’t get much publicity?
There is a lot that’s being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There’s an international effort to track fissile material.
After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israel destroyed the Iraqi and Syrian nuclear programmes with airstrikes. OK, self-interested, but if existing nuclear states stop their enemies getting nuclear weapons then it reduces the risk of a nuclear war.
Somebody wrote the Stuxnet worm to attack Iran’s enrichment facilities (probably) and Iran is under massive international pressure not to develop nuclear weapons.
Western leaders are at least talking about the goal of a world without nuclear weapons. OK, probably empty rhetoric.
India and Pakistan have reduced the tension between them, and now keep their nuclear weapons stored disassembled.
The US is developing missile defences to deter ‘rogue states’ who might have a limited nuclear missile capability (although I’m not sure why the threat of nuclear retaliation isn’t a better deterrent than shooting down missiles). The Western world is paranoid about nuclear terrorism, even putting nuclear detectors in its ports to try to detect weapons being smuggled into the country (which a lot of experts think is silly, but I guess it might make it harder to move fissile material around on the black market).
etc. etc.
Sure, in the 100 year timeframe, there is still a risk. It just seems like a world with two ideologically opposed nuclear-armed superpowers, with limited ways to gather information and their arsenals on a hair trigger, was much riskier than today’s situation. Even when “rogue states” get hold of nuclear weapons, they seem to want them to deter a US/UN invasion, rather than to actually use offensively.

Plus we invented the internet, greatly strengthening international relations and creating social and economic interdependency.
Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
This doesn’t appear to be the case at all. There are a variety of claimed existential risks which the intellectual elite are in general quite worried about. They just don’t overlap much with the kind of risks people here talk about. Global warming is an obvious example (and some people here probably think they’re right on that one) but the overhyped fears of SARS and H1N1 killing millions of people look like recent examples of lessons about crying wolf not being learned.
I don’t know about SARS, but in the case of H1N1 it wasn’t “crying wolf” so much as being prepared for a potential pandemic which didn’t happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn’t become as virulent as expected doesn’t mean that preparing for that eventuality was a waste of time.
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It’s difficult to judge this in any individual case that fails to develop into a serious problem but if you can observe a consistent ongoing pattern of dire predictions that do not pan out this is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious mis-allocation of resources.
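As a minimal illustration of that last point, here is a toy expected-cost model in Python; the damage and preparation figures are made up, and the only thing the sketch is meant to show is that a tenfold inflated probability estimate can flip the prepare/don’t-prepare decision and waste most of the resulting spending in expectation.

```python
# Toy expected-cost model with made-up numbers.
damage_if_unprepared = 1_000_000   # assumed cost of the disaster if nothing is done
preparation_cost = 50_000          # assumed cost of preparation, which prevents the damage

believed_risk = 0.10
true_risk = 0.01

# Decision rule: prepare if the expected damage under your belief exceeds the cost.
prepares = believed_risk * damage_if_unprepared > preparation_cost   # 100,000 > 50,000 -> True

expected_cost_if_prepared = preparation_cost                  # 50,000
expected_cost_if_not = true_risk * damage_if_unprepared       # 10,000
chosen = expected_cost_if_prepared if prepares else expected_cost_if_not
optimal = min(expected_cost_if_prepared, expected_cost_if_not)

print(f"expected cost of the chosen policy:  {chosen:,.0f}")
print(f"expected cost of the optimal policy: {optimal:,.0f}")
print(f"expected waste from the inflated risk estimate: {chosen - optimal:,.0f}")
```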
It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I’ve pointed to some examples of what look like over-confident predictions of disaster (there’s lots more in The Rational Optimist). I’m not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration however.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
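Treating the figures above as rough assumptions rather than precise counts, a crude base-rate calculation already lands an order of magnitude above 1%:

```python
# Crude base-rate sketch; the counts are the rough figures quoted above, not precise data.
new_subtypes = 20        # assumed: new flu subtypes since cheap international travel
severe_pandemics = 3     # assumed: Spanish flu plus a couple that killed millions

naive_rate = severe_pandemics / new_subtypes                   # 0.15
smoothed_rate = (severe_pandemics + 1) / (new_subtypes + 2)    # Laplace smoothing for the tiny sample

print(f"naive historical frequency: {naive_rate:.0%}")    # 15%
print(f"Laplace-smoothed estimate:  {smoothed_rate:.0%}")  # 18%
```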
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That’s not to say that the media didn’t hype swine flu and bird flu, but that doesn’t mean that the government preparations were an overreaction.
That’s not to deny that some threats are exaggerated, or that others (low-probability, global threats like asteroid strikes or big volcanic eruptions) don’t get enough attention.
I wouldn’t put much trust in Matt Ridley’s abilities to estimate risk:

http://news.bbc.co.uk/1/hi/7052828.stm (yes, it’s the same Matt Ridley)
Mr Ridley told the Treasury Select Committee on Tuesday, that the bank had been hit by “wholly unexpected” events and he defended the way he and his colleagues had been running the bank.
“We were subject to a completely unprecedented and unpredictable closure of the world credit markets,” he said.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder version of the flu than usual.
And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not from hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe:

http://www.cdc.gov/h1n1flu/qa.htm
Spread of the 2009 H1N1 virus is thought to occur in the same way that seasonal flu spreads. Flu viruses are spread mainly from person to person through coughing, sneezing or talking by people with influenza. Sometimes people may become infected by touching something – such as a surface or object – with flu viruses on it and then touching their mouth or nose.
Wikipedia takes a more middle-of-the-road view, noting that it’s not entirely clear how much transmission happens in which route, but still:

http://en.wikipedia.org/wiki/Influenza
The length of time the virus will persist on a surface varies, with the virus surviving for one to two days on hard, non-porous surfaces such as plastic or metal, for about fifteen minutes from dry paper tissues, and only five minutes on skin.
Which really suggests to me that hand-washing (or sanitizing) just isn’t going to be terribly effective. The best preventative is making sick people stay home.
Now, regular hand-washing is a great prophylactic for many other disease pathways, of course. But not for what the supposed purpose was.
I interpret what happened with H1N1 a little differently. Before it was known how serious it would be, the media started covering it. Now, even given that H1N1 was relatively harmless, it is quite likely that similar but non-harmless diseases will appear in the future, so having containment strategies and knowing what works is important. By making H1N1 sound scary, they gave countries and health organizations an incentive to test their strategies with lower consequences for failure than there would be if they had to test them on something more lethal. The reactions make a lot more sense if you look at it as a large-scale training exercise. If people knew that it was harmless, they would’ve behaved differently and lowered the validity of the test.
This looks like a fully general argument for panicking about anything.

It isn’t fully general; it only applies when the expected benefits (from lessons learned) exceed the costs of that particular kind of drill, and there’s no cheaper way to learn the same lessons.
Are you claiming that this was actually the plan all along? That our infinitely wise and benevolent leaders decided to create a panic irrespective of the actual threat posed by H1N1 for the purposes of a realistic training exercise?
If this is not what you are suggesting, are you saying that, although this panic was in fact an example of general government incompetence in the field of risk management, it purely coincidentally turned out to be exactly the optimal thing to do in retrospect?
I have no evidence that would let me distinguish between these two scenarios. I also note that there’s plenty of middle ground—for example, private media companies could’ve decided to create an unjustified panic for ratings, while the governments and hospitals decided to make the best of it. Or more likely, the panic developed without anyone influential making a conscious decision to promote or suppress it either way.
Just because some institutions over-reacted or implemented ineffective measures doesn’t mean that the concern wasn’t proportionate or that effective measures weren’t also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed (“Catch it, bin it, kill it”).
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn’t mean it’s not real.
SARS and H1N1 both looked like media-manufactured scares, rather than actual concern from the intellectual elite.

It wasn’t just the media:

Take the response to the avian flu outbreak in 2005. Dr David Nabarro, the UN systems coordinator for human and avian influenza, declared: ‘I’m not, at the moment, at liberty to give you a prediction on [potential mortality] numbers.’ He then gave a prediction on potential mortality numbers: ‘Let’s say, the range of deaths could be anything from five million to 150million.’ Nabarro should have kept his estimating prowess enslaved: the number of cases of avian flu stands at a mere 498, of which just 294 have proved fatal.
…
On 11 June 2009, just over a month after the initial outbreak in Mexico, the World Health Organisation finally announced that swine flu was now worthy of its highest alert status of level six, a global pandemic. Despite claims that there was no need to panic, that’s exactly what national health authorities did. In the UK, while the Department of Health was closing schools, politicians were falling over themselves to imagine the worst possible outcomes: second more deadly waves of flu, virus mutation – nothing was too far-fetched for it not to become a public announcement. This was going to be like the great Spanish Flu pandemic of 1918-20. But worse.
However, just as day follows nightmares, the dawning reality proved to be rather more mundane. By March 2010, nearly a full year after the H1N1 virus first began frightening the British government, the death toll stood not in the hundreds of thousands, but at 457. To put that into perspective, the average mortality rate for your common-or-garden flu is 600 deaths per year in a non-epidemic year and between 12,000 and 13,800 deaths per year in an epidemic year. In other words, far from heralding the imagined super virus, swine flu was more mild than the strains of flu we’ve lived with, and survived, for centuries. Reflecting on the hysteria which characterised the WHO’s response to Mexico, German politician Dr Wolfgang Wodarg told the WHO last week: ‘What we experienced in Mexico City was very mild flu which did not kill more than usual – which killed even less than usual.’
So Nabarro explicitly says that he’s talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.
Haha, ok point taken. I’m clearly wrong on this and there are a lot of examples. (At this point I’m also reminded of this Monty Python sketch although this is sort of the inverse).
I’m curious about your claim that “intellectuals care much more about the status-signaling aspects of their opinions than the common folk.” This seems plausible to me, but I’d be curious what substantial evidence there is for the claim.
I would like to define an “intellectual” as a person who I believe to be well educated and smart. Unfortunately, this definition will be deprecated as too subjective. An objective alternative definition would be to define intellectuals as a class of people who consider each other to be well educated and smart.
If that definition is accepted, then I think the claim is almost self-evident.
Coercive eugenics was very popular in intellectual circles until WWII.

Interestingly enough, one thing both these examples have in common is that they are cases of intellectuals arguing that intellectuals should have more power.
It was actually pretty popular in non-intellectual circles as well, but yes, that example still seems to be a decent one.
(Incidentally, I’m not actually sure what is wrong with coercive eugenics in the general sense. If for example we have the technology to ensure that some very bad alleles are not passed on (such as those for Huntington’s disease), I’m not convinced that we shouldn’t require screening or mandatory in vitro for people with the alleles. This may be one reason this example didn’t occur to me. However, I suspect that discussion of this issue in any detail could be potentially quite mind-killing given unfortunate historical connections and related issues.)
The historical meaning of the term is problematic partly because it wasn’t based on actual gene testing—I doubt they even tried to sort out whether someone’s low IQ was inheritable or caused by, say, poor nutrition—and partly because it was, and still in some cases would be, very subjective in terms of what traits are considered undesirable. How many of us wouldn’t be here if there’d been genetic tests for autism/aspergers or ADD or other neurodifferences developed before we were born?
If for example we have the technology to ensure that some very bad alleles are not passed on (such as those for Huntington’s disease), I’m not convinced that we shouldn’t require screening or mandatory in vitro for people with the alleles.
It gets much harder when you start talking about autism or deafness or any of a whole range of conditions that are abnormal but aren’t strictly disadvantageous.
Are there people who, having a deaf newborn child, would refuse to cure the condition based on the argument that deafness is not strictly disadvantageous?

Yes, and there’s been a fair bit of controversy in the “deaf community” over whether they should engage in selection for deaf children. See for example this article.
I’ve heard from more than one source that deaf parents of deaf children often take that stance—and that some deaf parents intentionally choose to have deaf children, even to the point of getting a sperm donor involved if the genetics require it.
I rather sympathize—if I ever get serious about procreating, stacking the deck in favor of having an autistic offspring will be something of a priority. (And, as I think about it, it’s for pretty much the same reason: Being deaf or autistic isn’t necessarily disadvantageous, but having parents that one has difficulty in communicating with is—and deaf people and autistic people both tend to find it easier to communicate with people who are similar.)
What do you mean by necessarily disadvantageous, then? I disagree that a difficulty in communication with parents is a more necessary disadvantage than deafness, but maybe we interpret the words differently. (I have no precise definition yet.)
Being deaf or autistic (or for that matter gay or left-handed or female or male or tall or short) is a disadvantage in some situations, but not all, and it’s possible for someone with any of the above traits to arrange their life in such a way that the trait is an advantage rather than a disadvantage, if other aspects of their life are amenable to such rearranging. (In the case of being deaf, a significant portion of the advantage seems to come from being able to be a member of the deaf community, and even then I have a little bit of trouble seeing it, but I’m inclined to believe that the deaf people who make that claim know more about the situation than I do.)
For contrast, consider being diabetic: It’s possible to arrange one’s life such that diabetes is well-controlled, but there seems to be a pretty good consensus among diabetics that it’s bad news, and I’ve never heard of anyone intentionally trying to have a diabetic child, xkcd jokes aside.
When a being is submitting a threat, through an audio-only channel, to destroy paperclips if you don’t do X, when that being prefers you doing X to destroying paperclips.
(The example generalizes to cases where you have a preference for something else instead of quantity of existent paperclips.)
So far, most of the answers are variations on being able to avoid unwanted noise or indirect effects of that ability (e.g. being able to pay less for a house because it’s in a very noisy area and most people don’t want it). There’ve also been comments about being able to get away with ignoring people, occasionally finding out things via lip-reading that the people speaking don’t think you’ll catch, and being able to use sign language in situations where spoken language is difficult or useless (in a noisy bar, while scuba diving).
These advantages look like rationalisations made by the parents in question, while I suspect (without evidence, I admit) that they simply fear their children being different. Seriously, ask a hearing person whether (s)he would accept a deafening operation in order to get away with ignoring people more easily.
Any condition can have similar “advantages”. Blind people are able to sleep during daylight, can easily ignore visual distractions, are better piano tuners on average. Should blind parents therefore have a right to make their child blind if they wish? Or should any parents be allowed to deliberately have a child without legs, because, say, there is a greater chance to succeed in the sledge hockey than in the normal one?
I get the point of campaigns aiming to move certain conditions from the category “disease” to the category “minority”. “Disease” is an emotionally loaded term, and the people with the respective conditions may have an easier life due to such campaigns. On the other hand, we mustn’t forget that they would have an even easier life without the condition.
“Disease” is an emotionally loaded term, and the people with the respective conditions may have an easier life due to such campaigns. On the other hand, we mustn’t forget that they would have an even easier life without the condition.
Obligatory Gideon’s Crossing quote:
Mother [to a black doctor who wants to give cochlear implants to her daughter]: You think that hearing people are better than deaf people.
Doctor: I’m only saying it’s easier.
Mother: Would your life be easier if you were white?
With that said, I agree those sound like rationalizations.
Or should any parents be allowed to deliberately have a child without legs, because, say, there is a greater chance to succeed in the sledge hockey than in the normal one?
All of which are obtainable through merely being hearing-impaired*, wearing ear-plugs, being raised by signing parents, or simple training.
Some benefits are bogus—for example, living in a noisy area doesn’t work because the noxious noises (say, from passing trains) are low-frequency and that’s where hearing is best; even the deaf can hear/feel loud bass or whatnot.
* full disclosure: I am hearing-impaired myself, and regard with horror the infliction of deafness or hearing-impairedness on anyone, but especially children.
I have the opposite problem, so perhaps I can add some insight.
Basically, I have Yvain’s sensitivity to audio distractions, plus I have more sensitive hearing—I’ll sometimes complain about sounds that others can’t hear. (And yes, I’ll verify that it’s real by following it to the source.)
Ear plugs don’t actually work against these distractions—I’ve tried it (I can sometimes hear riveting going on from my office at work). They block out a lot of those external sounds, but then create an additional path that allows you to hear your own breathing.
I agree that I wouldn’t be better off deaf, but there is such a thing as too much hearing.
Have you tried noise cancelling headphones? I found them pretty effective for cutting out audio distractions at work (when playing music). I stopped using them because they were a little too effective—people would come and try to get my attention and I’d be completely oblivious to their presence.
I’ve tried noise-cancelling headphones, but without playing music through them, because that is itself a distraction to me. It only worked against steady, patterned background noise.
I find certain types of music less distracting than the alternative of random background noise. Trance works well for me because it is fairly repetitive and so doesn’t distract me with trying to listen to the music too closely. It also helps if I’m listening to something I’m very familiar with and with the tracks in a set order rather than on shuffle. Mix CDs are good because there are no distracting breaks between tracks.
Seconding all of this except the bit about set order rather than shuffle, which I haven’t tried—it otherwise matches the advice I was going to give. Also, songs with no words or with words in a language you don’t speak are better than songs with words, and if you don’t want or can’t tolerate explicitly noise-canceling headphones, earplugs + headphones with the music turned up very loud also works.
I dunno, I don’t agree with deaf parents deliberately selecting for deaf children, but there is definitely a large element of trying to medicalise something that the people with the condition don’t consider to be a bad thing.
Anyway, I think Silas nailed deaf community attitudes with the comparison between being deaf and being black, the main difference being that one is considered cultural (and therefore the problem is other people’s attitudes towards it) and the other medical.
Edit: After further thought, I think I am using necessarily disadvantageous to mean that the disadvantages massively outweigh any advantages. Since being deaf gets you access to the deaf community and an awesome working memory for visual stuff, and (if you live in urban America) doesn’t ruin your life, I don’t think it’s all disadvantage.
the main difference being that one is considered cultural (and therefore the problem is other people’s attitudes towards it) and the other medical
I don’t see being black or white as any more cultural than being deaf; in either case you are born that way, and being raised in a different culture doesn’t change that a bit. The main difference is that the problem with being black is solely a result of other people’s attitudes. It is possible not to be a racist without any inconvenience, and if no people were racists, it wouldn’t be easier to be white. On the other hand, being deaf brings many difficulties even if other people lack prejudices against the deaf. Although I can imagine a society where all people voluntarily cease to use spoken language, sound signals, music, and whatever else may give them an advantage over the deaf, such a vision is blatantly absurd. On the other hand, a society (almost) without racism is a realistic option.
In an isolated community with high genetic risk of deafness (not as high as I thought—I remembered it as 1 in 6, it was actually 1 in 155), everyone knew the local sign language, deaf people weren’t isolated, and deafness wasn’t thought of as a distinguishing characteristic.
I wonder whether societies like that do as much with music as societies without a high proportion of deaf people.
Sorry, to clarify my comment about culture: I meant the problem is the surrounding culture’s attitude towards it, not the culture of the people with the possibly disadvantageous condition.
I am not advocating that the rest of society gives up spoken language (and the complaint about music is just silly), I am advocating a group’s right to do their thing provided it doesn’t harm others. And I am not convinced that trying to arrange for your genes to provide you with a deaf child qualifies as harm, any more than people on the autism spectrum hoping and trying to arrange for an autistic child qualifies as harm. Deaf and severely hearing-impaired people are going to keep being born for quite a while, since the genes come in several different flavours including both dominant and recessive types, so I would expect services for the deaf to continue as a matter of decency for the foreseeable future.
I am advocating a group’s right to do their thing provided it doesn’t harm others.
Do the rights include creating new group members? What about creationists screening their children from information about evolution, or any indoctrination of children for that matter? Does that qualify as harm, or is it the group’s right to do their thing? (Sorry if I sound combative; if so, it’s not my intention, only an inability to formulate the question more appropriately. I am curious where you place the border.)
I find the fact that raising someone to be creationist involves explicitly teaching them provably false things—and, in most cases, demanding that they express belief in those things to remain on good terms with their family and community—to be relevant. Having a child who’s deaf or autistic doesn’t intrinsically involve that.
(Yes, if I procreate, I intend to make a point of teaching my offspring how being autistic is useful. Even so, they’ll still be completely entitled to disagree with me about the relative goodness of being autistic compared to being neurotypical.)
This may be a bit personal, but are you concerned about having a child on the highly autistic end of the spectrum? (i.e. no verbal communication, needs a carer, etc.) To me that seems like a possible consequence of deliberately stacking the deck, and it would make me wary of doing so.
In the sense of ‘consider it a significant enough possibility that I’d make sure I was prepared for it’, yes. In the sense of ‘would consider it a horrible thing if it happened’, no. I’m not going to aim for that end of the spectrum, but I wouldn’t be upset if it worked out that way.
I’m still working that out for myself. There’s definitely a parallel between deaf people insisting that their way of life is awesome and weird religious cults doing the same. I guess I’m more sympathetic to deaf people though because once you’re deaf you may as well make the most of it, while bringing up your child religiously requires an ongoing commitment to raise them that way.
Ah, ok, I just found my boundary there. Kids brought up in a religious environment can at least make their own choice when they’re old enough, but deaf people can’t. I don’t support the deliberate creation of people with a lifelong condition that will make them a minority unless the minority condition is provably non-bad, but neither do I find the idea of more being born as horrifying as you seem to.
Wow, you’re drawing your boundary squarely in other people’s territory there. I would actively support others in their attempts to disempower you and violate said boundary.
Shrug. I actually have a lot of personal problems with children being taught religion (anecdotally, it appears to create a God-shaped hole and train people to look for Deep Truths, and that’s before getting into the deep end of fundamentalism) but as far as I’m concerned a large percentage of the values that parents try to teach their kids are crap. If I took a more prohibitive stance on teaching religion then I would also have to start getting a lot more upset about all the other stupid shit, plus I would be ignoring the (admittedly tangential) benefits that come from growing up in a moderate religious community.
Disclaimer: I was brought up somewhat religious and only very recently made the decision to finish deconverting (was 95% areligious before, now I finally realised that there isn’t any reason to hold on to that identity except a vague sense of guilt and obligation). So I wouldn’t be too surprised if my current opinion is based partly on an incomplete update.
I can empathise with your heritage. It sounds much like mine (where my apostasy is probably a few years older).
I, incidentally, don’t have an enormous problem with teaching religion to one’s own children. Religion per se isn’t the kind of fairy tale that does the damage. The destructive mores work at least as well in an atheistic context.
The fact that the educated and intelligent are sometimes in the wrong doesn’t mean it isn’t a good heuristic. Pretty much any heuristic is going to fail sometimes. The question is whether the heuristic is accurate (in the sense of being more often correct than not) and, if so, how accurate it is. This heuristic seems to be one where the general trend is clear. I can’t identify a single example other than Marxism in the last hundred years where the intellectual establishment has been very wrong, and even then, that’s an example where the general public in many areas also had a fair bit of support for that view.
I’m curious about your claim that that “intellectuals care much more about the status-signaling aspects of their opinions than the common folk.” This seems plausible to me, but I’d be curious what substantial evidence there for the claim.
I’m reading The Rational Optimist at the moment which has a few examples.
Malthusian ideas about impending starvation or resource exhaustion due to population growth have been popular with intellectuals for a long time but particularly so in the last 100 years. Paul Ehrlich is a well known example. He famously lost his bet with economist Julian Simon on resource scarcity. His prediction in The Population Bomb in 1968 that India would never feed itself was proved wrong that same year. These ideas were generally widely held in intellectual circles (and still are) but there is a long track record of specific predictions relating to these theories that have proved wrong.
Another case that springs to mind: it looks increasingly likely that the mainstream advice on diet as embodied in things like the USDA food guide pyramid was deeply flawed. The dominant theory in the intellectual establishment regarding the relationship between fat, cholesterol and heart disease also looks pretty shaky in light of new research and evidence.
I’d also argue that the intellectual establishment over the latter half of the twentieth century has over emphasized the blank-slate / nurture side of the nature vs. nurture debate and neglected the evidence for a genetic basis to many human differences.
Population/natural resource exhaustion related crises are a bit iffy, because it is plainly obvious that if they remain exponentially growing forever, relative to linearly growing or constant resources (like room to live on), one or the other has got to give. Mispredicting when it will happen is different from knowing that it has to happen eventually, and how could it not? Even expanding into space won’t solve the problem, since the number of planets we can reach as time goes on is smaller than exponential population growth rates and demands for resources.
There are definitely plenty of other scientifically held views that get overturned here and there, though—another example is fever, which for centuries has been considered a negative side effect of an infection, but lately it’s been found to have beneficial properties, as certain elements of your immune system function better when the temperature rises (and certain viruses function worse). http://www.newscientist.com/article/mg20727711.400-fever-friend-or-foe.html
Obviously the people disputing the wrong predictions know this. Julian Simon was just as familiar with this trivial mathematical fact as Paul Ehrlich. The fact that this knowledge led Paul Ehrlich to make bad predictions indicates that his analysis was missing something that Julian Simon was considering. Often this missing something is a basic understanding of economics.
JoshuaZ:
Well, on any issue, there will be both intellectuals and non-intellectuals on all sides in some numbers. We can only observe how particular opinions correlate with various measures of intellectual status, and how prevalent they are among people who are in the upper strata by these measures. Marxism is a good example of an unsound belief (or rather a whole complex of beliefs) that was popular among intellectuals because its basic unsoundness is no longer seriously disputable. Other significant examples from the last hundred years are unfortunately a subject of at least some ongoing controversy; most of that period is still within living memory, after all.
Still, some examples that, in my view, should not be controversial given the present state of knowledge are various highbrow economic theories that managed to lead their intellectual fans into fallacies even deeper than those of the naive folk economics, the views of human nature and behavior of the sort criticized in Steven Pinker’s The Blank Slate, and a number of foreign policy questions in which the subsequent historical developments falsified the fashionable intellectual opinion so spectacularly that the contemporary troglodyte positions ended up looking good in comparison. There are other examples I have in mind, but those are probably too close to the modern hot-button issues to be worth bringing up.
Frankly, in matters of politics and ideology, I don’t find the trend so clear. To establish the existence of such a trend, we would have to define a clear metric for the goodness of outcomes of various policies, and then discuss and evaluate various hypothetical and counterfactual scenarios of policies that have historically found, or presently find, higher or lower favor among the (suitably defined) intellectual class.
This, however, doesn’t seem feasible in practice. Neither is it possible to evaluate the overall goodness of policy outcomes in an objective or universally agreed way (except perhaps in very extreme cases), nor is it possible to construct accurate hypotheticals in matters of such immense complexity where the law of unintended consequences lurks behind every corner.
My answer is similar to the earlier comment by Perplexed: given the definition of “intellectual” I assume, the claim is self-evident, in fact almost tautological.
I define “intellectuals” as people who derive a non-negligible part of their social status—either as public personalities or within their social networks—from the fact that other people show some esteem and interest for their opinions about issues that are outside the domain of mathematical, technical, or hard-scientific knowledge, and that are a matter of some public disagreement and controversy. This definition corresponds very closely to the normal usage of the term, and it implies directly that intellectuals will have unusually high stakes in the status-signaling implications of their beliefs.
Opposition to nuclear power?
OK, but apart from Marxism, nuclear power, coercive eugenics, Christianity, psychoanalysis, and the respective importance of nature and nurture—when has the intellectual establishment ever been an unreliable guide to finding truth?
Come to think of it, one thing I’m surprised nobody mentioned is the present neglect of technology-related existential risks.
Yeah, that provides some more examples. The elite was very worried about existential risks from nuclear war (“The Fate of the Earth”), resource shortages and mass starvation (“Club of Rome”), and technology-based totalitarianism (“1984”). Now, having been embarrassed by falling for too many cries of wolf (or at least, for worrying prematurely), they are wary of being burned again.
I don’t think worrying about nuclear war during the Cold War constituted either “crying wolf” or worrying prematurely. The Cuban Missile Crisis, the Able Archer 83 exercise (a year after “The Fate of the Earth” was published), and various false alert incidents could have resulted in nuclear war, and I’m not sure why anyone who opposed nuclear weapons at the time would be “embarrassed” in the light of what we now know.
I don’t think an existential risk has to be a certainty for it to be worth taking seriously.
In the US, concerns about some technology risks like EMP attacks and nuclear terrorism are still taken seriously, even though these are probably unlikely to happen and the damage would be much less severe than a nuclear war.
I agree. And nuclear war was certainly a risk that was worth taking seriously at the time.
However, that doesn’t make my last sentence any less true, especially if you replace “embarrassed” with “exhausted”. The risk of a nuclear war, somewhere, some time within the next 100 years, is still high—more likely than not, I would guess. It probably won’t destroy the human race, or even modern technology, but it could easily cost 400 million human lives. Yet, in part because people have become tired of worrying about such things, having already worried for decades, no one seems to be doing much about this danger.
When you say that no one seems to be doing much, are you sure that’s not just because the efforts don’t get much publicity?
There is a lot that’s being done:
Most nuclear-armed governments have massively reduced their nuclear weapon stockpiles, and try to stop other countries getting nuclear weapons. There’s an international effort to track fissile material.
After the Cold War ended, the west set up programmes to employ Soviet nuclear scientists which have run until today (Russia is about to end them).
South Africa had nuclear weapons, then gave them up.
Israel destroyed the Iraqi and Syrian nuclear programmes with airstrikes. OK, self-interested, but existing nuclear states stop their enemies getting nuclear weapons then it reduces the risk of a nuclear war.
Somebody wrote the Stuxnet worm to attack Iran’s enrichment facilities (probably) and Iran is under massive international pressure not to develop nuclear weapons.
Western leaders are at least talking about the goal of a world without nuclear weapons. OK, probably empty rhetoric.
India and Pakistan have reduced the tension between them, and now keep their nuclear weapons stored disassembled.
The US is developing missile defences to deter ‘rogue states’ who might have a limited nuclear missile capability (although I’m not sure why the threat of nuclear retaliation isn’t a better deterrent than shooting down missiles). The Western world is paranoid about nuclear terrorism, even putting nuclear detectors in its ports to try to detect weapons being smuggled into the country (which a lot of experts think is silly, but I guess it might make it harder to move fissile material around on the black market).
etc. etc.
Sure, in the 100 year timeframe, there is still a risk. It just seems like a world with two ideologically opposed nuclear-armed superpowers, with limited ways to gather information and their arsenals on a hair trigger, was much riskier than today’s situation. Even when “rogue states” get hold of nuclear weapons, they seem to want them to deter a US/UN invasion, rather than to actually use offensively.
Plus we invented the internet—greatly strengthening international relations—and creating social and economic interdependency.
This doesn’t appear to be the case at all. There are a variety of claimed existential risks which the intellectual elite are in general quite worried about. They just don’t overlap much with the kind of risks people here talk about. Global warming is an obvious example (and some people here probably think they’re right on that one) but the overhyped fears of SARS and H1N1 killing millions of people look like recent examples of lessons about crying wolf not being learned.
I don’t know about SARS, but in the case of H1N1 it wasn’t “crying wolf” so much as being prepared for a potential pandemic which didn’t happen. I mean, very severe global flu pandemics have happened before. Just because H1N1 didn’t become as virulent as expected doesn’t mean that preparing for that eventuality was a waste of time.
Obviously the crux of the issue is whether the official probability estimates and predictions for these types of threats are accurate or not. It’s difficult to judge this in any individual case that fails to develop into a serious problem but if you can observe a consistent ongoing pattern of dire predictions that do not pan out this is evidence of an underlying bias in the estimates of risk. Preparing for an eventuality as if it had a 10% probability of happening when the true risk is 1% will lead to serious mis-allocation of resources.
It looks to me like there is a consistent pattern of overstating the risks of various catastrophes. Rigorously proving this is difficult. I’ve pointed to some examples of what look like over-confident predictions of disaster (there’s lots more in The Rational Optimist). I’m not sure we can easily resolve any remaining disagreement on the extent of risk exaggeration however.
Well, you also need to factor in the severity of the threat, as well as the risk of it happening.
Since the era of cheap international travel, there have been about 20 new flu subtypes, and one of those killed 50 million people (the Spanish flu, one of the greatest natural disasters ever), with a couple of others killing a few million. Plus, having almost everyone infected with a severe illness tends to disrupt society.
So to me that looks like there is a substantial risk (bigger than 1%) of something quite bad happening when a new subtype appears.
Given how difficult it is to predict biological systems, I think it makes sense to treat the arrival of a new flu subtype with concern and for governments to set up contingency programmes. That’s not to say that the media didn’t hype swine flu and bird flu, but that doesn’t mean that the government preparations were an overreaction.
That’s not to say that some threats aren’t exaggerated, and others (low-probability, global threats like asteroid strikes or big volcanic eruptions) don’t get enough attention.
I wouldn’t put much trust in Matt Ridley’s abilities to estimate risk:
http://news.bbc.co.uk/1/hi/7052828.stm (yes, it’s the same Matt Ridley)
Well obviously. I refer you to my previous comment. At this point our remaining disagreement on this issue is unlikely to be resolved without better data. Continuing to go back and forth repeating that I think there is a pattern of overestimation for certain types of risk and that you think the estimates are accurate is not going to resolve the question.
Maybe at first, but I clearly recall that the hype was still ongoing even after it was known that this was a milder flu-version than usual.
And the reactions were not well designed to handle the flu either. One example is that my university installed hand sanitizers, well, pretty much everywhere. But the flu is primarily transmitted not from hand-to-hand contact, but by miniature droplets when people cough, sneeze, or just talk and breathe:
http://www.cdc.gov/h1n1flu/qa.htm
Wikipedia takes a more middle-of-the-road view, noting that it’s not entirely clear how much transmission happens in which route, but still:
http://en.wikipedia.org/wiki/Influenza
Which really suggests to me that hand-washing (or sanitizing) just isn’t going to be terribly effective. The best preventative is making sick people stay home.
Now, regular hand-washing is a great prophylactic for many other disease pathways, of course. But not for what the supposed purpose was.
I interpret what happened with H1N1 a little differently. Before it was known how serious it would be, the media started covering it. Now even given that H1N1 was relatively harmless, it is quite likely that similar but non-harmless diseases will appear in the future, so having containment strategies and knowing what works is important. By making H1N1 sound scary, they gave countries and health organizations an incentive to test their strategies with lower consequences for failure than there would be if they had to test them on something more lethal. The reactins make a lot more sense if you look at it as a large-scale training exercise. If people knew that it was harmless, they would’ve behaved differently and lowered the validity of the test..
This looks like a fully general argument for panicking about anything.
It isn’t fully general; it only applies when the expected benefits (from lessons learned) exceed the costs of that particular kind of drill, and there’s no cheaper way to learn the same lessons.
Are you claiming that this was actually the plan all along? That our infinitely wise and benevolent leaders decided to create a panic irrespective of the actual threat posed by H1N1 for the purposes of a realistic training exercise?
If this is not what you are suggesting are you saying that although in fact this panic was an example of general government incompetence in the field of risk management it purely coincidentally turned out to be exactly the optimal thing to do in retrospect?
I have no evidence that would let me distinguish between these two scenarios. I also note that there’s plenty of middle ground—for example, private media companies could’ve decided to create an unjustified panic for ratings, while the governments and hospitals decided to make the best of it. Or more likely, the panic developed without anyone influential making a conscious decision to promote or suppress it either way.
Just because some institutions over-reacted or implemented ineffective measures, doesn’t mean that the concern wasn’t proportionate or that effective measures weren’t also being implemented.
In the UK, the government response was to tell infected people to stay at home and away from their GPs, and provide a phone system for people to get Tamiflu. They also ran advertising telling people to cover their mouths when they sneezed (“Catch it, bin it, kill it”).
If anything, the government reaction was insufficient, because the phone system was delayed and the Tamiflu stockpiles were limited (although Tamiflu is apparently pretty marginal anyway, so making infected people stay at home was more important).
The media may have carried on hyping the threat after it turned out not to be so severe. They also ran stories complaining that the threat had been overhyped and the effort wasted. Just because the media or university administrators say stupid things about something, that doesn’t mean it’s not real.
SARS and H1N1 both looked like media-manufactured scares, rather than actual concern from the intellectual elite.
It wasn’t just the media:
So Nabarro explicitly says that he’s talking about a possibility and not making a prediction, and ABC News reports it as a prediction. This seems consistent with the media-manufactured scare model.
Haha, ok point taken. I’m clearly wrong on this and there are a lot of examples. (At this point I’m also reminded of this Monty Python sketch although this is sort of the inverse).
I would like to define an “intellectual” as a person whom I believe to be well educated and smart. Unfortunately, this definition will be dismissed as too subjective. An objective alternative would be to define intellectuals as a class of people who consider each other to be well educated and smart.
If that definition is accepted, then I think the claim is almost self-evident.
Coercive eugenics was very popular in intellectual circles until WWII.
Interestingly enough, one thing both these examples have in common is that they are cases of intellectuals arguing that intellectuals should have more power.
It was actually pretty popular in non-intellectual circles as well, but yes, that example still seems to be a decent one.
(Incidentally, I’m not actually sure what is wrong with coercive eugenics in the general sense. If, for example, we have the technology to ensure that some very bad alleles are not passed on (such as those for Huntington’s disease), I’m not convinced that we shouldn’t require screening or mandatory in vitro fertilization for people with those alleles. This may be one reason this example didn’t occur to me. However, I suspect that discussion of this issue in any detail could be quite mind-killing given unfortunate historical connections and related issues.)
The historical meaning of the term is problematic partly because it wasn’t based on actual genetic testing—I doubt they even tried to sort out whether someone’s low IQ was heritable or caused by, say, poor nutrition—and partly because it was, and in some cases still would be, very subjective in terms of which traits are considered undesirable. How many of us wouldn’t be here if genetic tests for autism/Asperger’s or ADD or other neurodifferences had been developed before we were born?
It gets much harder when you start talking about autism or deafness or any of a whole range of conditions that are abnormal but aren’t strictly disadvantageous.
Are there people who, having a deaf newborn child, would refuse to cure the condition on the grounds that deafness is not strictly disadvantageous?
Yes, and there’s been a fair bit of controversy in the “deaf community” over whether they should engage in selection for deaf children. See for example this article.
I’ve heard from more than one source that deaf parents of deaf children often take that stance—and that some deaf parents intentionally choose to have deaf children, even to the point of getting a sperm donor involved if the genetics require it.
I rather sympathize—if I ever get serious about procreating, stacking the deck in favor of having an autistic offspring will be something of a priority. (And, as I think about it, it’s for pretty much the same reason: Being deaf or autistic isn’t necessarily disadvantageous, but having parents that one has difficulty in communicating with is—and deaf people and autistic people both tend to find it easier to communicate with people who are similar.)
What do you mean by necessarily disadvantageous, then? I disagree that a difficulty in communication with parents is a more necessary disadvantage than deafness, but maybe we interpret the words differently. (I have no precise definition yet.)
Being deaf or autistic (or for that matter gay or left-handed or female or male or tall or short) is a disadvantage in some situations, but not all, and it’s possible for someone with any of the above traits to arrange their life in such a way that the trait is an advantage rather than a disadvantage, if other aspects of their life are amenable to such rearranging. (In the case of being deaf, a significant portion of the advantage seems to come from being able to be a member of the deaf community, and even then I have a little bit of trouble seeing it, but I’m inclined to believe that the deaf people who make that claim know more about the situation than I do.)
For contrast, consider being diabetic: It’s possible to arrange one’s life such that diabetes is well-controlled, but there seems to be a pretty good consensus among diabetics that it’s bad news, and I’ve never heard of anyone intentionally trying to have a diabetic child, xkcd jokes aside.
In what situations is being deaf an advantage?
When a being is submitting a threat, through an audio-only channel, to destroy paperclips if you don’t do X, when that being prefers you doing X to destroying paperclips.
(The example generalizes to cases where you have a preference for something else instead of quantity of existent paperclips.)
/handwaves appeal to UDT/TDT/CDT/*DT
And by allowing yourself to remain deaf, you have defected and acausally forced other beings to defect, rendering you worse off.
Wakarimasen. (Japanese for “I don’t understand.”)
I don’t understand.
Exactly.
I’m researching this.
So far, most of the answers are variations on being able to avoid unwanted noise or indirect effects of that ability (e.g. being able to pay less for a house because it’s in a very noisy area and most people don’t want it). There’ve also been comments about being able to get away with ignoring people, occasionally finding out things via lip-reading that the people speaking don’t think you’ll catch, and being able to use sign language in situations where spoken language is difficult or useless (in a noisy bar, while scuba diving).
I’m still looking; there may be more.
These advantages look like rationalisations made by the parents in question; I suspect (without evidence, I admit) that they simply fear their children being different. Seriously, ask a hearing person whether (s)he would accept a deafening operation in order to get away with ignoring people more easily.
Any condition can have similar “advantages”. Blind people are able to sleep during daylight, can easily ignore visual distractions, and are better piano tuners on average. Should blind parents therefore have the right to make their child blind if they wish? Or should any parents be allowed to deliberately have a child without legs because, say, that child would have a greater chance of succeeding in sledge hockey than in the standard game?
I get the point of campaigns aiming to move certain conditions from the category “disease” to the category “minority”. “Disease” is an emotionally loaded term, and the people with the respective conditions may have an easier life thanks to such campaigns. On the other hand, we mustn’t forget that they would have an even easier life without the condition.
Obligatory Gideon’s Crossing quote:
Mother [to a black doctor who wants to give cochlear implants to her daughter]: You think that hearing people are better than deaf people.
Doctor: I’m only saying it’s easier.
Mother: Would your life be easier if you were white?
With that said, I agree those sound like rationalizations.
Also airplane dogfights, I’m given to understand.
All of which are obtainable through merely being hearing-impaired*, wearing ear-plugs, being raised by signing parents, or simple training.
Some benefits are bogus—for example, living in a noisy area doesn’t work because the noxious noises (say, from passing trains) are low-frequency and that’s where hearing is best; even the deaf can hear/feel loud bass or whatnot.
* full disclosure: I am hearing-impaired myself, and regard with horror the infliction of deafness or hearing-impairedness on anyone, but especially children.
I have the opposite problem, so perhaps I can add some insight.
Basically, I have Yvain’s sensitivity to audio distractions, plus I have more sensitive hearing—I’ll sometimes complain about sounds that others can’t hear. (And yes, I’ll verify that it’s real by following it to the source.)
Ear plugs don’t actually work against these distractions—I’ve tried them (I can sometimes hear riveting going on from my office at work). They block out a lot of external sound, but then create an additional path that lets you hear your own breathing.
I agree that I wouldn’t be better off deaf, but there is such a thing as too much hearing.
Have you tried simplynoise.com? For me, their Brown noise generator is the best thing for eliminating sound distractions.
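(For anyone who’d rather roll their own than stream from the site, here’s a minimal sketch of what a brown-noise generator amounts to. It assumes Python with numpy and scipy installed; the output filename and clip length are arbitrary. The underlying idea is just that brown noise is integrated white noise, which is why it sounds like a deep, steady rumble rather than a hiss.)

    import numpy as np
    from scipy.io import wavfile

    RATE = 44100     # samples per second
    SECONDS = 60     # length of the clip

    # Brown (Brownian) noise: each sample is a small random step
    # away from the previous one, i.e. a cumulative sum of white noise.
    white = np.random.randn(RATE * SECONDS)
    brown = np.cumsum(white)

    # Remove the slow drift and scale into the 16-bit range before writing a WAV file.
    brown -= brown.mean()
    brown /= np.abs(brown).max()
    wavfile.write("brown_noise.wav", RATE, (brown * 32767).astype(np.int16))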
I’ll have to give that a try, thanks.
Have you tried noise cancelling headphones? I found them pretty effective for cutting out audio distractions at work (when playing music). I stopped using them because they were a little too effective—people would come and try to get my attention and I’d be completely oblivious to their presence.
I’ve tried noise-cancelling headphones, but without playing music through them, because that is itself a distraction to me. It only worked against steady, patterned background noise.
I find certain types of music less distracting than the alternative of random background noise. Trance works well for me because it is fairly repetitive and so doesn’t distract me with trying to listen to the music too closely. It also helps if I’m listening to something I’m very familiar with and with the tracks in a set order rather than on shuffle. Mix CDs are good because there are no distracting breaks between tracks.
Seconding all of this except the bit about set order rather than shuffle, which I haven’t tried—it otherwise matches the advice I was going to give. Also, songs with no words or with words in a language you don’t speak are better than songs with words, and if you don’t want or can’t tolerate explicitly noise-canceling headphones, earplugs + headphones with the music turned up very loud also works.
Fair enough. It would be surprising if everyone had exactly optimal hearing.
I dunno, I don’t agree with deaf parents deliberately selecting for deaf children, but there is definitely a large element of trying to medicalise something that the people with the condition don’t consider a bad thing.
Anyway, I think Silas nailed deaf community attitudes with the comparison between being deaf and being black, the main difference being that one is considered cultural (and therefore the problem is other people’s attitudes towards it) and the other medical.
Edit: After further thought, I think I am using “necessarily disadvantageous” to mean that the disadvantages massively outweigh any advantages. Since being deaf gets you access to the deaf community and an awesome working memory for visual material, and (if you live in urban America) doesn’t ruin your life, I don’t think it’s all disadvantage.
I don’t see being black or white as any more cultural than being deaf; in either case you are born that way, and being raised in a different culture doesn’t change that a bit. The main difference is that the problem with being black is solely a result of other people’s attitudes. It is possible not to be a racist without any inconvenience, and if no one were racist, it wouldn’t be any easier to be white. Being deaf, on the other hand, brings many difficulties even if other people lack prejudices against the deaf. Although I can imagine a society in which everyone voluntarily ceases to use spoken language and sound signals, to listen to music, and to do anything else that might give them an advantage over the deaf, such a vision is blatantly absurd. A society (almost) without racism, by contrast, is a realistic option.
Everyone Here Spoke Sign Language: Hereditary Deafness in Martha’s Vineyard
In an isolated community with high genetic risk of deafness (not as high as I thought—I remembered it as 1 in 6, it was actually 1 in 155), everyone knew the local sign language, deaf people weren’t isolated, and deafness wasn’t thought of as a distinguishing characteristic.
I wonder whether societies like that do as much with music as societies without a high proportion of deaf people.
Sorry, to clarify my comment about culture: I meant the problem is the surrounding culture’s attitude towards it, not the culture of the people with the possibly disadvantageous condition.
I am not advocating that the rest of society give up spoken language (and the complaint about music is just silly); I am advocating a group’s right to do their thing provided it doesn’t harm others. And I am not convinced that trying to arrange for your genes to provide you with a deaf child qualifies as harm, any more than people on the autism spectrum hoping and trying to arrange for an autistic child qualifies as harm. Deaf and severely hearing-impaired people are going to keep being born for quite a while, since the genes come in several different flavours, including both dominant and recessive types, so I would expect services for the deaf to continue as a matter of decency for the foreseeable future.
Do those rights include creating new group members? What about creationists shielding their children from information about evolution, or any indoctrination of children for that matter? Does that qualify as harm, or is it the group’s right to do their thing? (Sorry if I sound combative; it’s not my intention, only an inability to formulate the question better. I am curious where you draw the line.)
I find the fact that raising someone to be creationist involves explicitly teaching them provably false things—and, in most cases, demanding that they express belief in those things to remain on good terms with their family and community—to be relevant. Having a child who’s deaf or autistic doesn’t intrinsically involve that.
(Yes, if I procreate, I intend to make a point of teaching my offspring how being autistic is useful. Even so, they’ll still be completely entitled to disagree with me about the relative goodness of being autistic compared to being neurotypical.)
This may be a bit personal, but are you concerned about having a child on the severely autistic end of the spectrum (i.e. no verbal communication, needs a carer, etc.)? To me that seems like a possible consequence of deliberately stacking the deck, and it would make me wary of doing so.
In the sense of ‘consider it a significant enough possibility that I’d make sure I was prepared for it’, yes. In the sense of ‘would consider it a horrible thing if it happened’, no. I’m not going to aim for that end of the spectrum, but I wouldn’t be upset if it worked out that way.
I’m still working that out for myself. There’s definitely a parallel between deaf people insisting that their way of life is awesome and weird religious cults doing the same. I guess I’m more sympathetic to deaf people though because once you’re deaf you may as well make the most of it, while bringing up your child religiously requires an ongoing commitment to raise them that way.
Ah, ok, I just found my boundary there. Kids brought up in a religious environment can at least make their own choice when they’re old enough, but deaf people can’t. I don’t support the deliberate creation of people with a lifelong condition that will make them a minority unless the minority condition is provably non-bad, but neither do I find the idea of more being born as horrifying as you seem to.
Wow, you’re drawing your boundary squarely in other people’s territory there. I would actively support others in their attempts to disempower you and violate said boundary.
Shrug. I actually have a lot of personal problems with children being taught religion (anecdotally, it appears to create a God-shaped hole and train people to look for Deep Truths, and that’s before getting into the deep end of fundamentalism) but as far as I’m concerned a large percentage of the values that parents try to teach their kids are crap. If I took a more prohibitive stance on teaching religion then I would also have to start getting a lot more upset about all the other stupid shit, plus I would be ignoring the (admittedly tangential) benefits that come from growing up in a moderate religious community.
Disclaimer: I was brought up somewhat religious and only very recently made the decision to finish deconverting (was 95% areligious before, now I finally realised that there isn’t any reason to hold on to that identity except a vague sense of guilt and obligation). So I wouldn’t be too surprised if my current opinion is based partly on an incomplete update.
I can empathise with your heritage. It sounds much like mine (though my apostasy is probably a few years older).
I, incidentally, don’t have an enormous problem with teaching religion to one’s own children. Religion per se isn’t the kind of fairy tale that does the damage. The destructive mores work at least as well in an atheistic context.