That might work if we had substantial track records for people. Unfortunately, for many issues that could potentially matter (say, the Singularity and cryonics) we won't have a good idea of who was correct for some time. It seems like a better idea to become an expert on a few issues yourself and then see how much a given expert agrees with you in your area of expertise. If they agree with you there, you should be more willing to give credence to them in their claimed areas of expertise.
Well, I would like to see more short-term predictions on TakeOnIt, where, after the event in question, comments are closed and what really happened is recorded. From this data, we could extrapolate who to believe about the long-term predictions.
That might work in some limited fields (economics and technological development being obvious ones). Unfortunately, many experts don't make short-term predictions. For this to work, one would need to get experts to agree to make such predictions, and they have a direct incentive not to, since the predictions can be used against them later (well, up to a point: psychics like Sylvia Browne make repeated wrong predictions and their followers don't seem to mind). I give Ray Kurzweil a lot of credit for having the courage to make many relatively short-term predictions (many of which have so far turned out to be wrong, but that's a separate issue).
Yes, in some cases there is no non-controversial set of issues (after the fact) with which to determine how effective an expert is. That means I can't convince the general public of how much they should trust the expert, but I can still figure out how much I should trust em by looking at the positions of theirs that I can evaluate.
There is also the possibility of saying something about such an expert based on correlations with experts whose predictions can be non-controversially evaluated.