You make a quick statement at the end about how Kurzweil does better than random chance. But I wonder how we’d assess that? I’d guess that, if he’s getting 50% correct or weakly correct, he’s doing better than random chance because many (most?) of his claims are far-fetched.
I’ve thought of a way to test this, although it will take another ten years:
Kurzweil makes a bunch of predictions about what will happen by 2023. Then you have a bunch of non-experts decide which of his predictions they agree with. After 10 years, we can measure how much better Kurzweil did than the non-experts.
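For concreteness, here's a minimal sketch (in Python) of how the scoring might work once the grades are in. Everything in it is an assumption for illustration: the correct / weakly correct / wrong grading scheme, the sample grades, and the 50% chance baseline.

```python
# Rough scoring sketch with hypothetical grades, not real data.
# Assumed grading scheme: 1.0 = correct, 0.5 = weakly correct, 0.0 = wrong.
from scipy.stats import binomtest

def mean_score(grades):
    """Average grade over the same fixed list of predictions."""
    return sum(grades) / len(grades)

# Hypothetical grades for Kurzweil and for the non-expert panel's consensus.
kurzweil = [1.0, 0.5, 0.0, 1.0, 0.5, 0.0, 1.0, 0.5]
panel    = [0.5, 0.5, 0.0, 1.0, 0.0, 0.0, 0.5, 0.5]

print(f"Kurzweil:    {mean_score(kurzweil):.2f}")
print(f"Non-experts: {mean_score(panel):.2f}")

# "Better than chance" under a coin-flip baseline: count only fully
# correct calls, since half-credit doesn't fit a binomial test.
hits = sum(1 for g in kurzweil if g == 1.0)
result = binomtest(hits, n=len(kurzweil), p=0.5, alternative="greater")
print(f"p-value vs. 50% chance: {result.pvalue:.3f}")
```

The hard part, of course, is the baseline: if most of the claims are far-fetched, a 50% coin flip is far too generous, which is exactly why comparing against the non-expert panel's base rate is more informative than comparing against "chance."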
I think I can do this! I read “The Age of Spiritual Machines” when it came out, and remember marking in the margins whether or not I agreed with each prediction. I was in high school at the time, and I think I left the book at home when I left for college. I will see if it is still there.
Though I also agree with the comment from handofixue that making the predictions is much harder than judging them.
Very cool! I’d love to see that. What year did you do this?
In fairness, it would seem that simply coming up with the prediction is probably a lot of the work.
As a metaphor: it’s relatively easy to walk non-experts through a proof of Gödel’s Incompleteness Theorem. The hard part is often coming up with the idea in the first place, or proving its correctness; simply agreeing on a proof or theory is vastly easier :)
For anyone who hasn’t read it, see “locating the hypothesis”.
Some of the predictions are affected by this more than others, but it’s hard to judge in any case. For example, the “nanotechnology is prevalent” hypothesis wouldn’t be that hard to locate, given that a lot of people were talking about nanotechnology at the time; then it’s just a matter of deciding yes or no based on the evidence and your model(s). On the other hand, something like his prediction of “Personal computers with high resolution interface embedded in clothing and jewelry, networked in Body LAN’s,” while wrong, is much harder to locate in the hypothesis space.