Damn! If only I’d listened to AI researchers five years ago.
(I know what you meant :-).)
Yes, it’s true that AI researchers’ greater expertise is to some extent counterbalanced by possible biases. I still think it’s likely that a typical eminent AI researcher has a better idea of the likely level of technological obsolescence over the next ~decade than a typical randomly chosen person with (say) an IQ over 160.
(I don’t think corresponding things are always true. For instance, I am not at all convinced that a randomly chosen eminent philosopher-of-religion has a better idea on average of whether there are any gods and what they’re like if so than a randomly chosen very clever person. I think it depends on how much real expertise is possible in a given field. In AI there’s quite a lot.)
Knowing whether AI will make a field obsolete takes expertise both in AI and in the given field.
There’s an xkcd for that: https://xkcd.com/793/
I agree that people who are both AI experts and truck drivers (or executives at truck-driving companies) will have a better idea of how many Americans will lose their truck-driving jobs because they get automated away, and likewise for other jobs.
Relatively few people are expert both in AI and in other fields at risk of getting automated away. I think having expertise in AI alone gives you a better chance than having neither kind. I don’t know who Yang’s “smartest people” actually were, but if they aren’t people with specific AI expertise, or specific expertise in areas likely to fall victim to automation, or maybe expertise in labor economics, then I think their pronouncements about how many Americans are going to lose their jobs to automation in the near future are themselves examples of the phenomenon that xkcd comic is pointing at.
(Also, e.g., truck drivers may well have characteristic biases when they talk about the likely state of the truck driving industry a decade from now, just as AI researchers may.)