Futuristic Predictions as Consumable Goods
The Wikipedia entry on Friedman Units tracks over 30 different cases between 2003 and 2007 in which someone labeled the “next six months” as the “critical period in Iraq”. Apparently one of the worst offenders is journalist Thomas Friedman, after whom the unit was named (8 different predictions in 4 years). In similar news, some of my colleagues in Artificial Intelligence (you know who you are) have been predicting the spectacular success of their projects in “3-5 years” for as long as I’ve known them, that is, since at least 2000.
Why do futurists make the same mistaken predictions over and over? The same reason politicians abandon campaign promises and switch principles as expediency demands. Predictions, like promises, are sold today and consumed today. They produce a few chewy bites of delicious optimism or delicious horror, and then they’re gone. If the tastiest prediction is allegedly about a time interval “3-5 years in the future” (for AI projects) or “6 months in the future” (for Iraq), then futurists will produce tasty predictions of that kind. They have no reason to change the formulation any more than Hershey has to change the composition of its chocolate bars. People won’t remember the prediction in 6 months or 3-5 years, any more than chocolate sits around in your stomach for a year and keeps you full.
The futurists probably aren’t even doing it deliberately; they themselves have long since digested their own predictions. Can you remember what you had for breakfast on April 9th, 2006? I bet you can’t, and I bet you also can’t remember what you predicted for “one year from now”.
I’ve been thinking about this problem a bit. I think that every futurist paper should include a section where it lists, clearly, exactly what counts as a failure for this prediction. In fact, that would be the most important piece of the paper to read, and those with the most stringent (and short term) criteria for failure should be rewarded.
And, in every new paper, the author should list past failures, along with a brief sketch of why the errors of the past no longer apply here. This is for the authors themselves as much as for the readers: they need to improve and calibrate their predictions. Maybe we could insist that new papers on a certain subject are not allowed unless past errors in that subject are addressed?
Of course, to make this all honest and ensure that errors aren’t concealed or minimized, we should ensure that people are never punished for past errors, only for a failure to improve.
Now, if only we could extend such a system to journalists as well… :-)
> I think that every futurist paper should include a section where it lists, clearly, exactly what counts as a failure for this prediction. In fact, that would be the most important piece of the paper to read, and those with the most stringent (and short term) criteria for failure should be rewarded.
Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false. This would allow the readers to see how much belief the author himself has in his theories.
There could even be some centralized service that keeps track of these predictions and deposits and their payments, perhaps allowing people to browse this list ranked and sorted on various criteria.
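For concreteness, here is a minimal sketch of the kind of bookkeeping such a registry might do. The field names, the scoring, and the ranking rule are illustrative assumptions of mine, not a description of any existing service:

```python
# Minimal sketch of a prediction registry's bookkeeping.
# All field names and the ranking rule are illustrative assumptions.
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    author: str
    claim: str                # the prediction, in plain language
    failure_criteria: str     # what would count as the prediction failing
    deadline: date            # when the prediction gets judged
    deposit: float            # sum forfeited if the prediction fails
    resolved: bool = False
    came_true: bool = False

def track_record(predictions: list[Prediction], author: str) -> tuple[int, int]:
    """Return (correct, total) among an author's resolved predictions."""
    resolved = [p for p in predictions if p.author == author and p.resolved]
    return sum(p.came_true for p in resolved), len(resolved)

def ranked_by_deposit(predictions: list[Prediction]) -> list[Prediction]:
    """One possible browsing order: largest deposits (most skin in the game) first."""
    return sorted(predictions, key=lambda p: p.deposit, reverse=True)
```

Ranking by deposit size is only one possible ordering; a per-author track record over resolved predictions would probably matter more to readers.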
> Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false. This would allow the readers to see how much belief the author himself has in his theories.
Of course, if one predicts something to happen a relatively long time from now, this might not work, because the deposit effectively feels lost (hyperbolic discounting). For instance, I wrote an essay speculating on true AI within 50 years: regardless of how confident I am in the essay’s premises and chains of logic, I wouldn’t stake any major sum on it, simply because “I’ll get it back in 50 years” is far enough in the future to feel equivalent to “I’ll never get it back”. I have more use for that money now. (Not to mention that inflation would eat pretty heavily into the sum, unless interest of some sort were paid.)
Were we talking about predictions made on considerably shorter time scales, deposits would probably work better, but I still have a gut feeling that any deposits made on predictions with a time scale of several years would be much lower than the futurists’ actual certainty of opinion would warrant. (Not to mention that the deposits would vary with the personal income level of each futurist, making accurate comparisons harder.)
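To put rough numbers on the inflation point above: assuming, purely for illustration, a constant 3% annual inflation rate and no interest, a deposit locked up for 50 years comes back with less than a quarter of its original purchasing power:

```python
# Real (inflation-adjusted) value of a deposit returned after `years` years.
# The 3% inflation rate is an illustrative assumption, not a forecast.
deposit = 1000.0
inflation = 0.03
years = 50
real_value = deposit / (1 + inflation) ** years
print(round(real_value, 2))  # ~228.11, i.e. under a quarter of the original sum
```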
http://www.saunalahti.fi/~tspro1/artificial.html gives me an access-forbidden error.
It’s at http://www.xuenay.net/artificial.html now; however, at this point in time I find it to be mediocre at best.
Say, Kaj, where’d you get that “50 years” figure from?
Stuart, and Ilkka, how about you guys go first, with your next paper? It is easy to say what other people should do in their papers.
Eliezer, good question. Now that I think of it, I realize that my AI article may have been a bit of a bad example to use here—after all, it’s not predicting AI within 50 years as such, but just making the case that the probability for it happening within 50 years is nontrivial. I’m not sure of what the “get the deposit back” condition on such a prediction would be...
...but I digress. To answer your question: IBM was estimating that they’d finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I’ve seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so. That’d seem to allow direct uploading of minds, which again would help considerably in the study of the underlying principles of intelligence. I tacked 30 years on that to be conservative—I don’t know how long it takes before people learn to really milk those simulations for everything they’re worth, but modern brain imaging techniques were developed about 15 years ago and are slowly starting to produce some pretty impressive results. 30 years seemed like an okay guess, assuming that the two were comparable and that the development of technology would continue to accelerate. (Then there’s nanotech giving enough computing power to run immense evolutionary simulations and other brute-force methods of achieving AI, but I don’t really know enough about that to estimate its impact.)
So basically the 50 years was “projections made by other people estimate really promising stuff within 20 years, then to be conservative I’ll tack on as much extra time as possible without losing the point of the article entirely”. ‘Within 50 years or so’ seemed to still put AI within the lifetimes of enough people (or their children) that it might convince them to give the issue some thought.
I just happened to read a clever speech by Michael Crichton on this topic today. I think his main point echoes yours (or yours his).
http://www.crichton-official.com/speeches/speeches_quote07.html
Working link: http://web.archive.org/web/20070411012839/http://www.crichton-official.com/speeches/speeches_quote07.html
Nice speech (although I disagree with the general discounting of all value for predictions); Crichton reminds me a lot of Scott Adams—he says a lot of insightful things, but occasionally also says something that drives me nuts.
I also liked this (even though such people are fish in a barrel):
And one of the teethgrinders:
A little one-sided, methinks: http://en.wikipedia.org/wiki/Antarctica#Ice_mass_and_global_sea_level
> Not only that, but that section should also include a monetary deposit that the author forfeits if his predictions turn out to be false.
That I strongly disagree with. We don’t want to discourage people from taking risks; we want them to improve with time. If there’s money involved, then people will be far shyer about the rigour of the “failure section”.
Ideally, we want people to take the most pride in saying “I was wrong before, now I’m better.”
> Stuart, and Ilkka, how about you guys go first, with your next paper? It is easy to say what other people should do in their papers.
Alas, not much call for that in mathematics—the failure section would be two lines: “if I made a math mistake in this paper, my results are wrong. If not, then not.”
However, I am planning to write other papers where this would be relevant (next year, or even this one, hopefully). And I solemnly swear in the sight of Blog and in the presence of this blogregation, that when I do so, I will include a failure section.
And the people here are invited to brutally skewer or mock me if I don’t do so.
Fine print at the end of the contract: Joint papers with others are excluded if my co-writer really objects.
Did you?
I did, in a paper that was rejected. The subsequent papers were not relevant (maths and biochemistry). But I will try and include this in the Oracle AI paper when it comes out.
And you didn’t resubmit it to other journals?
It was rambling and obsolete :-)
Rewriting it was more trouble than it was worth; you can find it at www.neweuropeancentury.org/GodAI.pdf if you want.
> Alas, not much call for that in mathematics—the failure section would be two lines: “if I made a math mistake in this paper, my results are wrong. If not, then not.”
Actually, the failure section would be: “If my results are wrong, I made a math mistake in this paper. If I made no mistake in this paper, my results are correct.”
> IBM was estimating that they’d finish building their full-scale simulation of the human brain in 10-15 years. Having a simulation where parts of a brain can be selectively turned on or off at will or fed arbitrary sense input would seem very useful in the study of intelligence. Other projections I’ve seen (but which I now realize I never cited in the actual article) place the development of molecular nanotech within 20 years or so.
Then you could make an interim prediction on the speed of these developments. If IBM are predicting a simulation of the human brain in 10-15 years, what would have to be true in 5 years if this is on track?
Same thing for nanotechnology: if those projections are right, what sort of capacities would we have in 10 years’ time?
But I completely agree with you about the unwisdom of using cash to back up these predictions. Since futurology speculations are more likely to be wrong than correct (because prediction is so hard, especially about the future), improving people’s prediction skills is much more useful than punishing failure.
> Alas, not much call for that in mathematics—the failure section would be two lines: “if I made a math mistake in this paper, my results are wrong. If not, then not.”
> Actually, the failure section would be: “If my results are wrong, I made a math mistake in this paper. If I made no mistake in this paper, my results are correct.”
Indeed! :-) But I was taking “my results” to mean “the claim that I have proved the results of this paper.” Mea Culpa—very sloppy use of language.
I’m surprised that nobody in this comment thread mentioned fusion power.