It would be interesting to know exactly what Pinker wrote. For instance, imagine that he wrote something like this:
What Yang does here is a mild example of the argument from authority. It may be true that 1⁄3 of Americans will lose their jobs to automation in 12 years, and if it’s true that the smartest people in the world expect that then that should make us think it more likely than we did before. But it’s a long way short of proof—the smartest people in the world may still not have much actual ability to predict what will happen 12 years out. And Yang never says which smartest people in the world, or how he knows that they’re the smartest, or what sort of smart, all of which could make a big difference to how much weight we give to their opinion. (Maybe the smartest people in the world are all theoretical physicists and don’t actually know anything about the economy. Maybe the people Yang thinks are the smartest in the world are the people who seem smart to him because they say a lot of things he agrees with. Maybe he has no idea who the smartest people in the world are and he’s just bullshitting. Or maybe he means that a selection of top economists, technologists and the like sat down and made a serious attempt to predict likely futures for technological development and their impact on the economy, and they estimate that 1⁄3 of Americans will lose their jobs to automation. These scenarios are not all alike.) Some arguments from authority are stronger than this, some are weaker, but the weaknesses here are typical: it’s not clear exactly what authority is being cited, it’s not clear exactly how expert that authority is, and even the most expert it could plausibly be isn’t enough to justify very much confidence in what they say.
I wouldn’t find that damning evidence against Pinker’s expertise in rationality.
FWIW, this was the full paragraph that I pulled the quote from,
The “smartest people in the world” claim from the Yang Gang is a mild example of the argument from authority. The authority being deferred to is often religious, as in the gospel song and bumper sticker “God said it, I believe it, that settles it.” But it can also be political or academic. Intellectual cliques often revolve around a guru whose pronouncements become secular gospel. Many academic disquisitions begin, “As Derrida has taught us . . .”—or Foucault, or Butler, or Marx, or Freud, or Chomsky. Good scientists disavow this way of talking, but they are sometimes raised up as authorities by others. I often get letters taking me to task for worrying about human-caused climate change because, they note, this brilliant physicist or that Nobel laureate denies it. But Einstein was not the only scientific authority whose opinions outside his area of expertise were less than authoritative. In their article “The Nobel Disease: When Intelligence Fails to Protect against Irrationality,” Scott Lilienfeld and his colleagues list the flaky beliefs of a dozen science laureates, including eugenics, megavitamins, telepathy, homeopathy, astrology, herbalism, synchronicity, race pseudoscience, cold fusion, crank autism treatments, and denying that AIDS is caused by HIV.
I maintain that the point is stupid. Putting aside that ‘smart’ could entail rationality, the bigger issue is that Steven argues [it’s non-conclusive] ⇒ [it’s a fallacy]. Certainly intelligent people can believe false things, but as long as intelligence correlates with being accurate, what Yang cites is still Bayesian evidence, just as you said in the review.
And it’s not hard to find better examples for irrationality. Julia Galef’s book contains a bunch of those, and they’re all clear-cut.
If all the information you had about a person was some very generic information about their IQ or rationality quotient, your best option would be to believe the person who scores highest.
But that is almost never the case. Experts have certificates indicating their domain-specific knowledge.
Would you want a random person with an IQ of 180 performing a surgical operation on you?
what Yang cites is still Bayesian evidence
But very weak Bayesian evidence. The human brain can’t physically deal with very small or very large quantities. You’re much better off disregarding very weak evidence.
You’re much better off disregarding very weak evidence.
Yes. This shouldn’t be confused with regarding it as lack of evidence. Occasionally you are better off making use of it after all, if the margins for a decision are slim. Evidence is never itself fallacious; the error is its misrepresentation as something that it isn’t. And labeling something as a fallacy can itself easily fall into such a fallacy, for example by implying that weak evidence of a certain form should be seen as lack of evidence or as counterevidence.
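(For concreteness, here’s a toy odds-form Bayes calculation, with entirely made-up numbers, showing how little a weak likelihood ratio moves a posterior compared with strong evidence:)

```python
# Toy illustration with made-up numbers: a weak likelihood ratio barely moves
# a posterior, while a strong one moves it a lot.

def posterior(prior_prob: float, likelihood_ratio: float) -> float:
    """One Bayesian update in odds form; returns the posterior probability."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.20  # hypothetical prior for "1/3 of jobs automated away in 12 years"

print(posterior(prior, 1.2))   # weak evidence (LR ~1.2): ~0.23, barely moves
print(posterior(prior, 10.0))  # strong evidence (LR 10): ~0.71, moves a lot
```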
Would you want a random person with an IQ of 180 performing a surgical operation on you?
If we didn’t have professional surgeons (and I need the surgery), then yes, and we don’t have something analogous to professional surgeons for predicting the future. (Maybe superforecasters, but that standard is definitely not relevant if we’re comparing Yang to the average politician.)
We do have people with expertise relevant to making the sort of prediction Yang’s talking about, though. For instance:
AI researchers probably have a better idea than randomly chosen very smart people of what the state of AI is likely to be a decade from now.
Economists probably have a better idea than randomly chosen very smart people of whether the likely outcome of a given level of AI progress looks more like “oh noes 1⁄3 of all Americans have lost their jobs” or “1/3 of all Americans find that the nature of their work has changed” or “no one actually needs a job any more”.
As a field, AI researchers are in AI research because they believe in its potential. While they do have expertise, there’s also a bias.
If you had listened to AI researchers five years ago, we would have driverless cars by now.
Damn! If only I’d listened to AI researchers five years ago.
(I know what you meant :-).)
Yes, it’s true that AI researchers’ greater expertise is to some extent counterbalanced by possible biases. I still think it’s likely that a typical eminent AI researcher has a better idea of the likely level of technological obsolescence over the next ~decade than a typical randomly chosen person with (say) an IQ over 160.
(I don’t think corresponding things are always true. For instance, I am not at all convinced that a randomly chosen eminent philosopher-of-religion has a better idea on average of whether there are any gods and what they’re like if so than a randomly chosen very clever person. I think it depends on how much real expertise is possible in a given field. In AI there’s quite a lot.)
Knowing whether AI will make a field obsolete takes both expertise in AI and expertise in the given field.
There’s an xkcd for that https://xkcd.com/793/
I agree that people who are both AI experts and truck drivers (or executives at truck-driving companies) will have a better idea of how many Americans will lose their truck-driving jobs because they get automated away, and likewise for other jobs.
Relatively few people are expert both in AI and in other fields at risk of getting automated away. I think having just expertise in AI gives you a better chance than having neither. I don’t know who Yang’s “smartest people” actually were, but if they aren’t people with either specific AI expertise, or specific expertise in areas likely to fall victim to automation, or maybe expertise in labor economics, then I think their pronouncements about how many Americans are going to lose their jobs to automation in the near future are themselves examples of the phenomenon that xkcd comic is pointing at.
(Also, e.g., truck drivers may well have characteristic biases when they talk about the likely state of the truck driving industry a decade from now, just as AI researchers may.)