If all the information you had about a person was some very generic information about their IQ or rationality quotient, your best option would be to believe the person who has the highest.
But that is almost never the case. Experts have certificates indicating their domain-specific knowledge.
Would you want a random person with an IQ of 180 performing a surgical operation on you?
What Yang is, is still Bayesian evidence.
But very weak Bayesian evidence. The human brain can’t physically deal with very small or very large quantities. You’re much better off disregarding very weak evidence.
Yes. This shouldn’t be confused with treating it as a lack of evidence. Occasionally you are better off making use of it after all, if the margins for a decision are slim. Evidence is never itself fallacious; the error is misrepresenting it as something it isn’t, and the act of labeling something a fallacy can itself fall into such a fallacy, for example by implying that weak evidence of a certain form should be seen as a lack of evidence or as counterevidence.
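To make the “weak evidence is still evidence, it just barely moves you” point concrete, here is a minimal odds-form Bayes sketch. The prior and the likelihood ratios are made-up numbers chosen purely for illustration, not estimates of anything in this thread.

```python
# Odds-form Bayesian update: posterior odds = prior odds * likelihood ratio.
# All numbers below are illustrative assumptions, not estimates from the thread.

def update(prior_prob: float, likelihood_ratio: float) -> float:
    """Return the posterior probability after one piece of evidence."""
    prior_odds = prior_prob / (1.0 - prior_prob)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

prior = 0.30  # assumed prior that the prediction is right

# Strong evidence (likelihood ratio 10) vs. very weak evidence (ratio 1.1,
# e.g. "some smart people believe it"): the weak update barely moves you.
print(update(prior, 10.0))  # ~0.81
print(update(prior, 1.1))   # ~0.32
```

With a likelihood ratio near 1 the posterior barely budges, which is exactly why such evidence only matters when the decision margin is already slim.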
If we didn’t have professional surgeons (and I need the surgery), then yes, and we don’t have something analogous to professional surgeons for predicting the future. (Maybe superforecasters, but that standard is definitely not relevant if we’re comparing Yang to the average politician.)
We do have people with expertise relevant to making the sort of prediction Yang’s talking about, though. For instance:
AI researchers probably have a better idea than randomly chosen very smart people of what the state of AI is likely to be a decade from now.
Economists probably have a better idea than randomly chosen very smart people of whether the likely outcome of a given level of AI progress looks more like “oh noes 1/3 of all Americans have lost their jobs” or “1/3 of all Americans find that the nature of their work has changed” or “no one actually needs a job any more”.
As a field, AI researchers are in AI research because they believe in its potential. While they do have expertise, there’s also a bias.
If you had listened to AI researchers five years ago, we would have driverless cars by now.
Damn! If only I’d listened to AI researchers five years ago.
(I know what you meant :-).)
Yes, it’s true that AI researchers’ greater expertise is to some extent counterbalanced by possible biases. I still think it’s likely that a typical eminent AI researcher has a better idea of the likely level of technological obsolescence over the next ~decade than a typical randomly chosen person with (say) an IQ over 160.
(I don’t think corresponding things are always true. For instance, I am not at all convinced that a randomly chosen eminent philosopher-of-religion has a better idea on average of whether there are any gods and what they’re like if so than a randomly chosen very clever person. I think it depends on how much real expertise is possible in a given field. In AI there’s quite a lot.)
Knowing whether AI will make a field obsolete takes both expertise in AI and expertise in the given field.
There’s an xkcd for that https://xkcd.com/793/
I agree that people who are both AI experts and truck drivers (or executives at truck-driving companies) will have a better idea of how many Americans will lose their truck-driving jobs because they get automated away, and likewise for other jobs.
Relatively few people are expert both in AI and in other fields at risk of getting automated away. I think having just expertise in AI gives you a better chance than having neither. I don’t know who Yang’s “smartest people” actually were, but if they aren’t people with either specific AI expertise, or specific expertise in areas likely to fall victim to automation, or maybe expertise in labor economics, then I think their pronouncements about how many Americans are going to lose their jobs to automation in the near future are themselves examples of the phenomenon that xkcd comic is pointing at.
(Also, e.g., truck drivers may well have characteristic biases when they talk about the likely state of the truck driving industry a decade from now, just as AI researchers may.)