James Shanteau found in “Competence in Experts: The Role of Task Characteristics”...
Good to see that paper being given an airing. But one important thing that must be done is to decompose the problems we’re working on: some results may be more solid than others. I’ve shown that using expert opinion to establish AI timelines is nearly worthless. However, you can still get some results about the properties of AIs (see for instance Omohundro’s AI-drives paper), and these are far more solid (for one, they depend much more on arguments than on expertise). So we’re in the situation of having no clue when and how AIs could emerge, but being fairly confident that there’s a high risk if they do.
Compare, for instance, the economics of the iPhone. We failed to predict the iPhone ahead of time (continually predicting that these kinds of things were either just around the corner or in the far future), but the iPhone didn’t escape the laws of economics, copying, and competition. We can often say something about things, even if we must fail to say everything.