New Year’s Predictions Thread (2011)
As we did last year, use this thread to make predictions for the next year and next decade, with probabilities attached when practical.
Happy New Year, Less Wrong!
Shouldn’t we also review how accurate last year’s predictions were?
Why? We already consumed those delicious predictions.
To evaluate calibration.
’Twas sarcasm, friend. http://lesswrong.com/lw/hi/futuristic_predictions_as_consumable_goods/
Ah—I hadn’t read that post yet, so I missed the reference. Thanks!
I checked: most people (including me) predicted for the decade rather than the year. In hindsight, that seems to be all about Far Mode signaling, so I made sure to do a 2011 prediction.
There were some obviously false ones—for example, predictions of a terrorist attack in the US. (There were attempts of varying levels of entrapment, but I didn't find any actually successful ones.)
In my opinion, mattnewport’s 50% odds weren’t that badly calibrated (especially given the Times Square attempt).
For me, the Times Square attempt falls straight into the 'self-radicalized incompetents' category, akin to the British airport attack or the JFK plot; it makes perfect sense to me that 'real' attacks will be rare and cause minimal casualties (I have argued that Terrorism is not about Terror and Terrorism is not Effective in the past, and now that I think about it, supposedly there's a power-law governing terrorist attacks).
As far as credible attacks go, I guessed from going through Wikipedia’s categories of terrorist attacks by year that a better prior would be more like 1⁄3, not 1⁄2.
But when I say 'obviously false' I don't mean that it's an insane prediction. (If you read through the thread you'll see I do call out some predictors for making predictions which I consider 'on crack'—not naming any names here, but none of his seem any more likely to be true after the passage of a year.) I just mean that it's objectively and clearly judgeable as wrong now that the year is out.
This is a rare and valuable property, as one will learn after reading through a few compilations of predictions for 2011.
Thanks for the clarification; my first reading was that you were holding people to the wrong standard. If you made 10 predictions at 70% each, and 6 of them come true, then you should be lauded rather than criticized. If all 10 of them come true (and appear to be causally independent of each other), then you should be criticized for underconfidence.
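For concreteness, here is the standard binomial check behind that intuition (a sketch added here, not something from the original comment):

```python
# Under perfect calibration, 10 independent predictions each made at 70%
# behave like draws from Binomial(n=10, p=0.7).
from math import comb

def binom_pmf(k, n, p):
    return comb(n, k) * p**k * (1 - p)**(n - k)

n, p = 10, 0.7
p_six_or_fewer = sum(binom_pmf(k, n, p) for k in range(7))  # ~0.35: getting 6/10 right is unremarkable
p_all_ten      = binom_pmf(10, n, p)                        # ~0.028: all 10 right hints at underconfidence
print(p_six_or_fewer, p_all_ten)
```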
/looks at http://predictionbook.com/users/gwern
: (
I’m guessing that many of your 80% and 90% predictions would be positively correlated with one or two major trends (e.g. there was no global economic meltdown in 2010), so it’s not quite that simple. Looks like pretty good calibration to me overall.
The 70-90% bracket for me is ~70 predictions judged; I don’t think most or even many of them are economic-related (except in a weak sense).
If you wanted to check my intuition: there doesn't seem to be any easy way to filter my user page for just the judged predictions within a probability range, short of downloading and processing the HTML. But you could look through a few dozen of the recently judged predictions (http://predictionbook.com/predictions/judged), since I have looked at every prediction on the site and registered my own probabilities for essentially every prediction that isn't either sports or highly personal (and even then I've frequently given it a shot anyway).
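For anyone who does want to filter the page programmatically, a rough sketch of the "download and process the HTML" route mentioned above (the selectors and text patterns below are guesses about the page structure, not PredictionBook's documented markup; adjust after inspecting the actual pages):

```python
import re
import requests
from bs4 import BeautifulSoup

def judged_in_range(user_url, lo=70, hi=90):
    # Collect judged predictions on a user page whose stated probability falls
    # in [lo, hi]. The parsing is a guess: it assumes each prediction renders as
    # a list item containing "right"/"wrong" and a percentage like "80%".
    soup = BeautifulSoup(requests.get(user_url).text, "html.parser")
    hits = []
    for item in soup.find_all("li"):
        text = item.get_text(" ", strip=True)
        m = re.search(r"(\d{1,3})%", text)
        if m and ("right" in text.lower() or "wrong" in text.lower()):
            p = int(m.group(1))
            if lo <= p <= hi:
                hits.append((p, text))
    return hits

# Hypothetical usage:
# print(len(judged_in_range("http://predictionbook.com/users/gwern")))
```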
Wow, really? I’ll happily make the same bet again for 2011 if you’ll offer it again—though I’d like to slightly tighten the rules to say it has to be a conspiracy, because I don’t think something like Fort Hood should count.
Oh, my own odds are lower than mattnewport’s, just not low enough to call his estimate crazy.
Also, my bid-ask spread is pretty high if we’re talking actual bets. I’d take the “yes” side at something like 10:1 odds, and the “no” side at 2:1 odds the other way. And I’d only feel morally comfortable betting the “yes” side in the context of a formal prediction market, where the positive externalities to having accurate odds would assuage my guilt about collecting in that case.
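As a worked example of what that spread implies (my arithmetic, reading "10:1" and "2:1" as the payoffs offered to the respective sides of the bet):

```python
# Taking "yes" at 10:1 (win 10 units, lose 1) is positive-EV only if P(yes) > 1/11.
# Taking "no" at 2:1 (win 2, lose 1) is positive-EV only if P(no) > 1/3, i.e. P(yes) < 2/3.
lower_bound = 1 / (10 + 1)   # ~0.09
upper_bound = 2 / (2 + 1)    # ~0.67
print(lower_bound, upper_bound)  # the quoted spread brackets P(attack) roughly in [0.09, 0.67]
```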
Don’t ask, do! :)
The unemployment rate in the United States will continue to be above 8%: 90%
“Core inflation” of the U.S. dollar (which ignores food and energy prices) shall remain below 2.0%: 80%
The fifth book in the “A Song of Ice and Fire” series will be published: 5%
A superintelligent AGI will be created: Less than 1 in 1 million
The Large Hadron Collider will destroy the world: Less than 1 in 1 million
My 96-year-old grandmother survives another year: 67%
The Riemann hypothesis is proven: 1 in 5000
I qualify for the Magic Pro Tour: 1%
I get a “real job”: 1%
Agreed. And if we’re proven wrong, I promise to buy everybody on this list who publicly disagrees a beer, or else an unreasonably expensive mineral water as a means of signaling the equivalent level of deference.
It might be worth noting that even if the LHC does do something that causes the world to be destroyed, it might be a rather slow destruction that could take a few years to complete, so that’s not entirely an empty promise...
Damn! I had counted on it being completely empty. Well, no backing out of it now. My offer stands!
Don’t worry, I’ll chip in. ;)
Dead wrong on this one. She died a few hours ago.
*hugs*
I was going to ask why you gave death such a low probability, but when I went to look at what the mortality rates actually said for a 96-year-old female, I found that many tables don't even go that high: http://www.deathriskrankings.com/MortStats.aspx
Huh.
D: I’m sorry to hear that. Are you doing ok?
Yeah, so far. She had been mostly gone for quite a while, so...
This one and Alzheimer’s greatly bother me because they are cryonics-resistant.
1 in 5000 is too small for the Riemann hypothesis, given that people have only been seriously working on it for 130 or so years, and that there still exist very smart people who are optimistic about one approach or another. (I know some of them, in fact.) It’s not like P vs. NP, where the experts in the field are agreed that there’s almost surely a long way to go for any approach.
I’d put it at 1% to 2.5%, myself.
I assume these all are for 2011 given the year time-frame for the grandmother:
unemployment: http://predictionbook.com/predictions/2125
inflation: http://predictionbook.com/predictions/2126
GRRM: http://predictionbook.com/predictions/2127
AGI: effective dupe of http://predictionbook.com/predictions/2092
LHC: effective dupe of http://predictionbook.com/predictions/1432
Grandmother prediction is moot
Riemann: effective dupe of http://predictionbook.com/predictions/2094
Magic: http://predictionbook.com/predictions/2128
job: http://predictionbook.com/predictions/2129
This sounds about right. GRRM expires before finishing the series: 30%.
http://predictionbook.com/predictions/2130
China will remain on-schedule with their build-up of nuclear plants (70%) and will announce that they are once again increasing their goal for nuclear generation capacity by 2020 above their current target of 112 GWe (50%). The mainstream position of major environmentalist groups in Europe and North America will continue to be that nuclear plants are always delayed and far over budget, even as China continues to demonstrate otherwise (90%).
Cloud computing will see two big trends. First, commoditization: there will be more choices, and significantly better interoperability between those choices (80%). Second, Amazon Web Services will continue to roll out interesting new services, and everybody else in the market will either be playing catch-up or trying to focus on different niches, but not leading the market in any significant way (70%).
Iran will not produce a nuclear bomb in 2011 (90%). North Korea will continue saber rattling, but will not be involved in a war (80%).
There will be a second season of Highschool of the Dead, allowing me to get my zombie action fix for the year (60%). I know this is crushingly inconsequential, but zombies are fun.
SpaceX will successfully complete orbit-matching and docking with the International Space Station (60%), as well as the first flight of their Falcon 1e rocket (70%). Conditioning on success in the ISS docking, the first two resupply missions will succeed (80%). All within 2011.
ONE YEAR LATER: I am too damn optimistic. The highly publicized accident at Fukushima caused China to delay their nuclear build-up. My cloud computing predictions were accurate, but also super conservative, and so were the ones about Iran and North Korea. There wasn’t a second season of Highschool of the Dead; just a fairly disappointing one-episode OVA. The Falcon 1e looks delayed, and the first docking of the Falcon 9 with the Space Station is scheduled for early 2012, which is nice, but still not 2011, so that prediction didn’t quite come true.
Iran: http://predictionbook.com/predictions/2032
What is saber-rattling in a Korean context? Does it involve no fatalities? Then it overlaps with http://predictionbook.com/predictions/2097
HSotD: http://predictionbook.com/predictions/2104 (BTW, you know there’s an OVA slated for 2011, right? http://en.wikipedia.org/wiki/Highschool_of_the_Dead#Anime )
SpaceX/ISS: http://predictionbook.com/predictions/2105
Falcon maiden flight: http://predictionbook.com/predictions/2106
Resupply: http://predictionbook.com/predictions/2107
Is this the conjunction fallacy here? I read this as P(China on schedule) = 70% but P(enviros dis nukes & China on schedule) = 90%.
Do you mean P(enviros dis nukes | China on schedule) = 90%, or P(enviros dis nukes) = 90%, with the reference to China a rhetorical flourish?
Looks like I could have phrased that better. My 70% prediction was that China would remain overall on schedule. My “China continues to demonstrate otherwise” statement only requires that a significant number of China’s nuke plants continue to be on schedule and under budget, which requires much less conjunction of events. For example, if huge setbacks arise for the construction of AP1000 reactors, but China’s CPR1000 reactors continue to be constructed cheaply and smoothly, then this could invalidate my first prediction but not the second.
That’s a nice thing about China’s approach to nuclear build-up: they’re going with a bunch of different designs and approaches, from conventional gigawatt behemoths to Russian fast breeders to indigenously-designed pebble beds. If something doesn’t pan out, they may not meet their goals, but they have a high probability of one or more approach working smoothly. So far, it looks like everything is going according to plan.
OK, thanks.
Wow, you think the North Korea situation’s that bad? A one in five chance of war seems too high for me.
In retrospect, you’re right. Bump up the no-war-in-NK probability. It’s funny how assigning a lower probability to X feels cautious, but assigning a higher probability to not-X feels risky, even though it’s just two ways of saying the same damn thing. I think there’s a top-level post in this somewhere.
For even more recent predictions, y’all can check out http://predictionbook.com/predictions
(I’ve recently added ~90 based on various lists of predictions sparked by the new year, many of which are interesting, IMO; this is quite aside from all the predictions on this page.)
As noted by Keynes, markets can remain irrational a lot longer than you and I can remain solvent. However, I’d say about 80% for each of the following:
The price of gold will fall in real terms. [ETA one year on—Wrong. Fell a bit towards the end of the year, but still a net rise.]
The price of U.S. housing will fall in real terms. [ETA one year on—numbers still seem to be being crunched, or at least I can’t find anything very recent I’d regard as reliable.]
The rate of inflation of U.S. dollars will rise. [ETA one year on—I think I got this one right. Subject to updating with better data in the near future.]
There will be much more discussion in the general media of a “higher education bubble.” Not sure how to measure this.
@JoshuaZ, the number of servicemembers leaving due to DADT repeal will not be measured. The government won't ask, and won't tell. [ETA one year on—I think I got this one right, for what it's worth. As I understand it, the U.S. military is on a path toward shrinking in general anyway.]
And finally—not necessarily a prediction for this year—things that can’t go on forever won’t. 100%
Yes, but others will be trying to measure this, including some aspects of the right-wing in the US.
I’ll go a bit further and predict that, because of how politicized the issue is, there will not be any mainstream agreement on whose unofficial figures are more accurate.
Gold: http://predictionbook.com/predictions/2100
Housing: http://predictionbook.com/predictions/2101
Inflation: http://predictionbook.com/predictions/2102
Skipping higher-ed bubble and no-measurement because I don’t know any good way to operationalize them (Google hits seems like a terrible one, even if we could get replicable data from them about the rate)
I disagree, for a narrow definition of “general media,” and agree for a broad definition of it. It’s been an open secret for quite some time that college has a lot of flaws and is negative value for wide swaths of the population, and I imagine that will catch on in many circles. But I really don’t see that breaking into the mainstream media, since the value of college is artificially high in most of their narratives (especially since the journalists who are working for the MSM will be people who need to double down on their belief that their degree was worth it).
I should have thought of this before: I just googled “higher education bubble” in quotes, and got ‘About 41,000 results’. A search in google news gives five results. A year from now, that should be higher. Not a very good yardstick, but at least it’s checkable.
You’ll need the relative frequency of the phrase (per running word of news article text), not the absolute count of it. In the extreme, if the “from” date of the google news search doesn’t advance, you can only expect the number to increase :) Even if there’s a fixed time window, you can also expect more words to be generated in the next year than in the past year.
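A minimal sketch of the normalization being suggested (the counts below are placeholders for illustration, not real measurements):

```python
# Compare hits per million words of indexed news text, not raw hit counts,
# since raw counts can only grow as the index grows.
hits_2010, words_2010 = 5, 2.0e9        # placeholder values
hits_2011, words_2011 = 12, 2.6e9       # placeholder values

rate_2010 = hits_2010 / words_2010 * 1e6
rate_2011 = hits_2011 / words_2011 * 1e6
print(rate_2011 > rate_2010)            # this comparison tracks actual increased discussion
```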
For what it’s worth, my latest search for the same phrase now results in ‘About 1,860,000 results.’
Update on housing prediction?
Checking CronoDAS’s prediction, the core rose to something like 2.15%, which was definitely above what it was last time, so I marked it right.
I was going to mention that it's much better to reply to your predictions a year on than to edit them, but then I remembered who I'm talking to.
Was that another prediction, or something the government has already confirmed?
That would be a prediction, specifically applied to what the federal government will do or not do during 2011.
With that said, I'd say human motivations are complicated. I could foresee, for instance, the following motivations for leaving the military:
Enlisted infantryman:
[Subjective motivation]: I have failed ranger school in an embarrassing fashion.
[Declared motivation for leaving the army]: A bunch of fags are running the show now.
Infantry officer:
[Subjective motivation]: A bunch of fags are running the show now.
[Declared motivation]: I now wish to serve my country by pursuing public office.
I took it as a joke.
Why do you think the price of US housing will fall in real terms (even not counting inflation)?
Basically, I’m persuaded by the arguments that housing prices are out of line with a historical trend and out of line with incomes and prevailing rents.
Now that you mention it, and upon reflection, I must admit I am biased by some vague, aesthetic sentiments as well. My subjective impression is that the bigger=better approach to housing in America should become less fashionable rather than more. I’d like to think of this as an “intuition,” but maybe it’s just a “bias.” Anyway, we’ll get some evidence as the new year goes on.
Are you going to therefore revise your estimate downward?
That’s a good question. If this were a disinterested Bayesian search for the truth, then of course. But I take this particular game as a simulated bet, albeit for no monetary stakes. Before I post anything, I have to search inside myself for the most rational course; but once I place my bet, I’m stuck with it. No fair changing it. Reality will determine whether I win or lose, come the end of 2011.
Also, in this real world of ours, I think inarticulable intuitions may have a rational basis, in the same way that a skilled basketball player can make a free throw without knowing what the hell a parabola is. With that said, I should say I don’t yet know whether my predisposition was a “bias” or a “brilliant intuition.”
On the other hand, this could make for good, low-stress practice at updating estimates based on new evidence/deductions. You probably should keep your original estimate in its place (since that’s what people’s replies are predicated on), but could you say here what you’d now estimate that probability to be?
Another good question. Upon further reflection, I'd have to admit that my thoughts in this area are... what's the opposite of rigorous? Fuzzy, vague, unconscious? Someone who is very experienced and skilled in an area that requires intangible intuition or a "feel" for something might have very good judgment without being able to articulate the bases for that judgment. But I'm not experienced or skilled in the housing market. For that reason, I think I should be very suspicious of my own subjective intuitions in this area. They are breeding grounds for unknown bias. So I should revise my estimate downward. I must also admit that any pretense of a specific percentage estimate is far more precise than I am able to give. Right now I'm thinking of a range rather than a point, something like sixty to eighty percent.
P.S. Goddammit, I had to think about this some more and must clarify. My confession: I had to revise specifically downward because I think my subjective emotional biases would tend to make me wish that the housing market came down. On a conscious level, I made my first estimate in good faith. But now I am a bit more wary of what I was then thinking.
Fucking rationality. Okay, I’ve got stuff to do, so I’m going to forget this topic entirely for a bit.
General AI will not be made in 2011. Confidence: 90%.
The removal of DADT by the US military will result in fewer than 300 soldiers leaving the military in protest. (Note that this may be hard to measure.) Confidence: 95%.
The Riemann Hypothesis will not be proven. Confidence: 75%. (Minor note for the nitpickers: relevant foundational system is ZFC.)
Ryan Williams' recent bound on ACC circuits of NEXP (see here for a discussion of Williams' work) will be tightened in at least one of three ways: the result will be shown to apply for some smaller set of problems than NEXP, the result will be improved for some broader type of circuit than ACC, or the bound on the circuit size ruled out will be improved. Confidence: 60%
At least one head pastor of a Protestant megachurch in the US will be found to be engaging in homosexual activity. For purposes of this prediction “megachurch” means a church with regular attendance of 3000 people at Sunday services. Confidence: 70%.
Clashes between North Korea and South Korea will result in fatalities: Confidence 80%.
I hope to hell you’re underconfident about that.
Would you classify MC-AIXI as a General AI?
Given Vast quantities of computing power, it would qualify as a very silly AGI which will eventually try dropping an anvil on its own head just to see what happens.
Roughly, no.
Nice failsafe if it unexpectedly goes “foom” though.
No.
I’m definitely willing to put $300 to $100 on Riemann not being proved—even knowing that you may have some relevant insider knowledge.
I guess one issue is that we couldn’t expect to know for sure by this time next year if it had been proved or not—I’d be willing to go with something like “by January 1 2012, no preprint will exist of an essentially correct proof of the Riemann Hypothesis (in ZFC)”.
Yeah, multiple people have called me out on RH, and thinking about it more, I probably was just being way too underconfident.
GAI: http://predictionbook.com/predictions/2092
DADT: http://predictionbook.com/predictions/2093 (assuming year timescale)
Riemann: http://predictionbook.com/predictions/2094
Ryan Williams: http://predictionbook.com/predictions/2095 (I, uh, hope you’ll be judging that one; I don’t follow complexity theory work as closely as you obviously do.)
Sex scandal: http://predictionbook.com/predictions/2096
Korea: http://predictionbook.com/predictions/2097 (when judging I assume any unilateral attack counts even if the other side doesn’t retaliate, like the Cheonan)
Regarding 4, there are a fair number of people here interested in complexity theory issues so it shouldn’t be that hard to get people to judge that. Also note that I deliberately made the question more precise by listing three of the more plausible ways the result might be extended rather than just that it would be tightened. That helps make the question clear cut (if it were generalized to anything that could be reasonably construed as a tightening of his result I’d bump the probability up but it would be much trickier to judge the result.)
I think your confidence for the Riemann Hypothesis not being proven is way too low. Unless there has been some major recent improvement I’m unaware of, next year doesn’t look too much better than any recent year. In addition to the apparent improbability of being solved in any particular year, I would suspect that a problem of this magnitude would show some more significant cracks before being solved.
If I thought there was anything like a 10% of AGI in the next year, my priorities would be radically different.
A lot of people in this thread have said that I’m way too underconfident about RH, and thinking about that, they are probably right. At this point, there are two “obviously false” statements that we can’t even disprove:
1) There are zeros of the zeta function arbitrarily close to the line Re s = 1.
2) A positive fraction of the non-trivial zeros lie off the line (with the zeros ordered in the obvious way by the size of the imaginary part).
We also can't prove the related Lindelöf hypothesis, which is an easy consequence of the Riemann hypothesis.
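For reference, compact formal versions of these statements (standard formulations, added here for clarity; not part of the original comment):

```latex
\text{RH: every non-trivial zero } \rho \text{ of } \zeta \text{ satisfies } \Re(\rho) = \tfrac{1}{2}.

\text{(1)}\quad \forall \varepsilon > 0 \;\exists \rho:\ \zeta(\rho) = 0,\ \Re(\rho) > 1 - \varepsilon.

\text{(2)}\quad \liminf_{T \to \infty}
  \frac{\#\{\rho : \zeta(\rho) = 0,\ 0 < \Im\rho \le T,\ \Re\rho \ne \tfrac{1}{2}\}}
       {\#\{\rho : \zeta(\rho) = 0,\ 0 < \Im\rho \le T\}} > 0.

\text{Lindel\"of (a consequence of RH):}\quad
  \zeta(\tfrac{1}{2} + it) = O_\varepsilon(t^{\varepsilon}) \text{ for every } \varepsilon > 0.
```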
All three of these are things that one would expect to be done before RH is proved, and we don't seem very close to resolving any of them, with the possible exception of statement 2. There's some reason to believe that 2 might be disproven by further tightening of Hardy-Littlewood type results, in the same way that Levinson and Conrey showed that at least two fifths of the zeros lie on the line. (I lack the technical knowledge to evaluate the plausibility that Conrey's type of result can be further tightened, although the fact that his result in its tightest known form has stood for about 20 years suggests that further tightening is very non-trivial.) Note that we can prove the slightly weaker statement that almost all the zeros lie within epsilon of the 1⁄2 line, which is sort of a hybrid of the negations of 1 and 2, but that result is about a hundred years old. (One thing I actually wish I understood more is how, if at all, that result connects to the Hardy-Littlewood type results. My impression is that they really don't in any obvious way, but I'm not sure.) Analogs of RH have also been proven in many different contexts (including the appropriate analogs for finite fields and for certain p-adic analogs); I don't know much about most of those, but it seems like those results don't give obvious hints for how to resolve the original case in any useful fashion.
I guess the real reason for my estimate is that there seem to be so many promising techniques for approaching the problem that I don’t understand much in detail, such as using operator theory, or connecting the location of the zeros to the behavior of some quasicrystals.
I was almost certainly underconfident in my above estimate, and I think everyone was very right to call me out on that, so I won’t be taking any bets on it. I’m not sure how to revise it. Upwards is clearly the correct direction, but thinking about the probability estimate in more details leaves me feeling very confused.
GAI & Riemann look underconfident to me.
How confident are you, that they are underconfident? Each one?
My estimates are 95% Riemann in a year, 97.5% GAI in a year
This looks rather odd with no context quoted! :P
I bet Eliezer really, really, really hopes I’m overconfident about THAT one.
Missing NOTs!
I like this. In other words it’s over 70% that AGI will be invented before 2020.
No. That calculation assumes independent probabilities for each year.
So, it’s not the prediction for the next year only, but for a longer period.
Interesting.
No. One model that could produce that prediction for next year is that one is 89% confident that GAI is impossible, and 90% confident that if possible, it will be discovered next year. Strange beliefs, yes.
But they give 10% chance this year and 11% in the next ten.
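For concreteness, the arithmetic behind both readings (a sketch; the second model is the hypothetical one described just above):

```python
# Reading 1: treat "10% per year" as independent across ten years.
p_decade_independent = 1 - 0.9**10      # ~0.65

# Reading 2: 89% confident GAI is impossible; if possible, 90% confident it arrives next year.
p_possible   = 0.11
p_next_year  = p_possible * 0.90        # ~0.10 for next year
p_decade_max = p_possible               # at most 0.11 over the whole decade

print(p_decade_independent, p_next_year, p_decade_max)
```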
Seems underconfident. Shane Legg, who is an AGI researcher, predicts:
Although it really depends on what you mean by “General AI.”
How representative is Legg’s prediction of people in that field? It “feels” very optimistic, but I don’t have the relevant expertise. What do other researchers think?
My page has some graphs:
http://alife.co.uk/essays/how_long_before_superintelligence/
Ben covered this issue in a 2010 paper:
How long until human-level AI? Results from an expert assessment
Anecdotal evidence suggests it assigns way more probability mass to the next few decades (leaving almost none for subsequent development, social collapse, etc) than the vast majority of researchers. Surveys have been conducted among folk strongly selected for belief in AI (e.g. at Ben’s AGI conference linked below, on transhumanist websites, etc) that put a big chunk of probability in that period, but usually not as much as Legg.
Unfortunately, there aren’t yet any representative expert polls, and it would be hard to get an expert class that was expert on neuroscience, AI, and outside factors that could speed up progress (e.g. biotech enhancement). Worse, where folk have expressed skeptical views, they have almost always been just negatives with respect to a particular date, rather than probabilities. It seems fairly likely that the median relevant representative expert would assign a probability over 5% but less than 50% to Legg/Kurzweil timelines, particularly if you factored out mysticism/religion-based skepticism.
EDIT: Here’s another survey.
26 of the contributors to the NSF-backed "Managing nano-bio-info-cogno innovations: converging technologies in society" were surveyed on the dates at which various technologies would be developed, and the median predictions are reported in Appendix 1.
Page 344 gives a median estimate of 2085 for AI functionally equivalent to the human brain:
This is handy as a less selected group than attendees at an Artificial General Intelligence conference, although they’re still folk with a professional interest in futuristic technologies generally.
This is helpful. One question, though:
Does this mean “For any given year, a relevant expert would only assign 1/20-1/2 the probability of FOOM by that year that Legg and Kurzweil do”? If not, what does it mean?
Shane Legg says that there is a 95% probability of human-level AI by 2045. Kurzweil doesn’t give probabilities, but claims high confidence in Turing Test passing AI by 2029 and a slow takeoff Singularity over the next two decades. I would bet that a representative sample of experts would assign less than 50% probability to human-level AI by 2045, but more than 5%.
I was surprised, his recent post didn’t leave me with this impression, and I didn’t remember the past well enough. But apparently this is correct, here’s the post and visualization of the prediction endorsed by Legg.
Cool, thanks.
Why do you draw attention to this question and how did you reach this estimate?
Unless you have knowledge of some change (e.g., newspaper interest), this seems easy to estimate from the frequency of occurrence. This is a question about the steady state of the world, not about the future (modulo the not terribly rapid change in the number of megachurches). If you care enough to mention this, I'd expect you to care enough to have some gut-feeling estimate for it. In particular, 70% odds for the first such scandal to reach those who haven't heard about earlier scandals (if any) is absurd (again, unless you know about some change).
Compare this to your prediction about RH!
This question was based on a combination of the base rate and the increasing number of megachurches.
But these scandals do occur. A naive baseline rate puts them at slightly over one every two years. Prior example scandals include Ted Haggard (2006) and Eddie Long (2010). I picked a rate slightly higher than expected from the historical base rate primarily because the number of megachurches has been growing over the last few years. (Note also that I used a stricter definition of megachurch than is often used: 2000 regular congregants is often the dividing line, not 3000.)
There’s another reason to expect an increasing rate—it’s harder to keep secrets these days.
Yes.
Also, I strongly suspect that changing attitudes towards homosexuality in the US plays a role, although I don’t have a precise understanding of how I expect that to work.
Speaking roughly: my intuition is that when X is bad, a lot of people who have minor suspicions that someone they trust is doing X are motivated to pursue those suspicions, but when X is really really bad, those same people are instead motivated to not think about their suspicions. And I think the ratio of people who think homosexuality is really really bad to those who merely think it’s bad is decreasing.
Not to mention, of course, that illicit relationships by their nature can’t be kept secret from everyone—the other person in the relationship has to know—and the more acceptable the class of relationship becomes in the broader community the easier it is for the other person to reveal it.
I wonder if itemised bills for rentboys etc have turned up on any megachurch’s accounts. That would make for an amusing, if not very surprising, document leak.
95%: The US Federal Reserve is not audited by the US government.
http://predictionbook.com/predictions/2090
As Ray Kurzweil’s predictions have come up in the comments, here’s his “Predictions Essay” (4.1MB) where he grades himself on them and responds to some critics.
I wish I could upvote this more than once.
Wind power will provide more actual electricity than nuclear power by 2021, 80%, and I'd be willing to put actual money behind this if it weren't for the high transaction costs of such a long-term bet.
EDIT: Recent events at the Fukushima power plant make me slightly more certain about my prediction. The magnitude of the effect will depend on the severity of the failure, but it will undoubtedly make safe renewables like wind look more attractive than nuclear power, decreasing the chances of the supposed "nuclear renaissance".
http://predictionbook.com/predictions/2124
Thank you.
I was about to take the other side because of Chinese expansion of nuclear power, but apparently they’re boosting their wind energy capacity even more (in the next decade, at least). By 2020 they expect 80 GWe of nuclear and 100 GWe of wind.
(Link stolen from last year’s energy predictions, but the article was updated last month.)
ETA: Ah, as sketerpot pointed out last year, the figures for peak wind capacity are misleading, as actual production is 20-30% of peak capacity. (You won’t usually have optimal windspeed.) Nuclear plants run at 90%+ of capacity. So upon second thought, I would take the nuclear side at those odds.
Here’s reasoning behind my bet:
First, Chinese plans for wind power are consistently ahead of schedule, and ridiculously so:
It would be beating a dead horse to mention nuclear power and schedule in the same sentence.
Second, growth in wind power is extremely widespread.
On the other hand, nuclear is extremely concentrated. Today the US, Japan, and France alone produce more than half of global nuclear electricity—and they're not expanding much. All of nuclear's hopes rest on China, but China has little nuclear power, with very low utilization rates for what it has, and in fact China already produces more electricity from wind than from nuclear!
And third, sketerpot's numbers are totally, completely, utterly wrong.
Wind capacity factors are rapidly increasing due to technological progress. From 2005 to 2008 alone, global average capacity factors increased from 19.2% to 24.5%. The problem was never "wind not blowing"—capacity is not installed based on highest wind velocities—the problem was getting adequate power from a wider range of wind velocities. The real range is more like 30%-40% for new wind farms.
The actual nuclear capacity factor was 78% globally in 2008. American nuclear gets 90%, but that's what you get from a mature, non-expanding nuclear industry; the typical nuclear problem is an entire plant being shut down for one reason or another, and such problems are concentrated in the first decade of a plant's operation. Nuclear will never get 90%+ capacity factors if it's rapidly expanding.
I'd be quite willing to bet on wind overtaking nuclear in nameplate capacity around 2015-2017, but I'd need to double-check all the figures before putting my money behind this bet.
Assuming both are growing, we’d get capacity factors like 30%:75% - wind higher than now as most of these wind farms would be very recent, and nuclear lower than now as most of these power plants would have typical first-decade issues. This 2.5x growth can easily happen in another five years.
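A rough sketch of the underlying comparison, i.e. actual generation is roughly nameplate capacity times capacity factor (the capacity figures are the illustrative 2020 China numbers quoted earlier in the thread, not forecasts):

```python
cf_wind, cf_nuclear = 0.30, 0.75            # capacity factors used in the comment above

def average_output_gw(nameplate_gwe, capacity_factor):
    return nameplate_gwe * capacity_factor  # average GW actually delivered

print(average_output_gw(80, cf_nuclear))    # ~60 GW from 80 GWe of nuclear
print(average_output_gw(100, cf_wind))      # ~30 GW from 100 GWe of wind
print(cf_nuclear / cf_wind)                 # 2.5: the nameplate ratio wind needs to match nuclear's output
```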
OK, good research. Given that info, I still think 80% is a bit too high, but not outside my bid-ask spread.
In 2011, a new law will be proposed in some jurisdiction which specifically: Restricts cryonics activity. 15%. Protects cryonics patients. 3%.
restrict: http://predictionbook.com/predictions/2108
protect: http://predictionbook.com/predictions/2109
I think even these numbers are a little high, except for the fact that you didn’t limit it by jurisdiction. Cryonics isn’t hot right now, but longevity certainly is. I don’t think there is enough attention on cryonics to justify legislation, but even if there were, the first steps of the legal battle would be court decisions rather than legislation.
Cryonics has recently attracted a small but dedicated opposition who’ve adopted the framing that cryonics is a scam which consumers need protection from. (I won’t link to them, but you can find them in any google search for the word “cryonics”.) The basic issue seems to be that it matches their perception of a Scientology-like cult. They’ve been growing more active, so I wouldn’t put it past them to try to push something through this year.
There was a bill specifically targeting cryonics proposed in 2004 in Arizona. Arguably the Ted Williams event was its cause, so as long as cryonics organizations are more careful to establish clear evidence of consent in celebrity cases, the likelihood of it being repeated in a given year should be relatively low.
Oddly enough, the publicity from the Ted Williams case triggered an investigation in Michigan in which they determined that CI was an unlicensed cemetery. While not exactly a new law, it is certainly a new legal restriction, as the existing law for cemeteries prohibits preparing the body on site.
That's not to say there aren't useful features of cemetery law—e.g. there's a law for making sure that 15% of a cemetery's monies are kept in a perpetual care fund. A law like this would make sense in the context of cryonics facilities if they were considered a separate sort of entity from cemeteries, one permitted to do things necessary for patient care which have no relevance to a cemetery situation, such as preparing the patient on site.
I didn’t actually realize cryonics was such a hot topic on this site until after I had posted, so I became a little worried that I’d get beaten with the newbie stick for it.
I consider myself a transhumanist (in the sense that I find genetic alteration, computer augmentation, life extension, etc to be desirable goals, not in the sense that I drank the Kurzweil Kool-Aid and think that all this is inevitable or even probable in my lifetime), but I had never really considered cryonics as a major transhumanist approach. I’m certainly not opposed to cryonics on any kind of ethical grounds (my personal pragmatic concerns are a matter for another thread entirely), but since this is a question of the policy rather than the science side of cryonics, I have to go with my general observation that legislatures almost inevitably show up a day late and a dollar short. I think that the first wave of legislation on the topic will come at least one legislative session after the irrational masses start to get worked up into a religious frenzy over cryonics. So this is, to me, an issue better suited for decade rather than year predictions. I am however, compelled to agree with you that the likelihood of pro-cryonics legislation appears to be significantly less than the likelihood of anti-cryonics legislation. Hell, even if I weren’t a transhumanist, the civil libertarian in me would be appalled by Michigan’s facepalmingly bureaucratic handling of the situation. “Cryonics Institute is clearly operating as both a funeral establishment and cemetery without any state oversight.” Do we really need a government permission slip to bury/freeze our dead?
Also, why am I completely unsurprised by the fact that Arizona was the state to try and ban cryonics?
I’d be more than happy to debate any and all pragmatic concerns you can think of in another thread. Feel free to start one in Discussion. I’m not signed up yet, focusing largely on the advocacy side of things. As a younger adult it seems like advocacy has a higher potential payoff both in research getting done before my turn comes and having freedom and necessary infrastructure to get preserved under ideal circumstances. Currently it’s very difficult to arrange an ideal preservation.
I’m not 100% libertarian, and try to see both sides. There is something to the argument that there should be a law requiring cryonics organizations to have good financial arrangements covering long term care. The state has a legitimate interest in preventing the thawing of patients, along similar (though not identical) lines to the interest it has in preventing graveyards from having to sell their land to developers. But that interest is not even remotely close to being an adequate excuse to prevent patients from achieving an ideal preservation. We’re being handed a false dichotomy when forced to regulate cryonics as if it were a cemetery operation (or as a standardly defined medical one, if it comes to that).
10%: the United States has a historically hot summer.
This is only notable because it’s the only factor that could get cap-and-trade legislation through a Republican House. (Whose bright idea was it to have the 2009 Copenhagen climate change summit in winter? The juxtaposed headlines with the winter storm may have set back the legislation indefinitely.)
If you’ll give me 9:1 odds on this summer being, say, the hottest summer in the US in a century, I’ll take it!
I was thinking along the lines of Skatche’s reasoning above. 10% is my break-even point; if you were willing to go against me at 19:1, I’d take it.
I didn’t make myself clear—it’s the other side of the bet I want!
Oh, in that case I’d take the “no” side at 5:1 odds or lower. (I’m metauncertain enough that I wouldn’t dare make bets in either direction close enough to my break-even point.)
At those odds a bet is almost but not quite worth it I think!
OK, so it seems our estimates are within the same bid-ask spread.
EDIT: Or rather, our bid-ask spreads intersect.
The meta-uncertainty excuse doesn't make a lot of sense to me: it's enough that you want enough expected gain to justify the transaction cost.
Or is there some kind of rigorous notion of meta-uncertainty you’re appealing to?
Hmm. Actually, it’s because I haven’t bothered to collect all the information I could, and so my bid-ask spread serves as a confidence interval. If it were too small, then I’d actually find it probable that someone else could do the research I haven’t, figure out that the true value is on one side or the other of my interval, and exploit me.
This makes sense. So the interval at which you were willing to bet would increase given higher stakes (as that would give someone more incentive to do the research)?
What I'm trying to understand is what a confidence interval means in a Bayesian context. A 'credible interval' seems to be the analogous concept, but even after reading the article I'm still quite confused as to what a credible interval is in the context of subjective probability. I've also seen people here refer to the 'stability' of their beliefs, a concept which seems to function similarly. It definitely feels like it would be a useful tool; I just don't quite get what it would mean as a way of describing beliefs instead of repeatable trials.
And if we can talk about credible intervals for beliefs… isn’t that really relevant information for predictions? Shouldn’t we give intervals in addition to p values? I’m not sure it makes sense to assume normal distributions for casually calculated probabilities on one-off events. This is especially the case since humans are really, really bad at distinguishing between probabilities at extremely high and low levels.
One way to think about the bid-ask spread, is that while orthonormal’s current probability is 10%, he’d consider someone offering to bet him actual money on one side or the other to be sufficient evidence to adjust his belief significantly in that direction.
According to NOAA, 4 of the years from 1980 to 1997 were the hottest years so far of the century. So this summer has roughly a 1/5 to 1/4 chance of being the hottest year of the century.
That’s global temperature—I’d guess US temperature has more noise.
Intrade gives 2011 a 34% chance to be the warmest year on record, so 10% seems low.
But that’s global annual temperature, not US summer temperature. The closest thing I could find to US summer temperature with a 5 minute search is the NASA GISS dataset for the average northern hemisphere land-surface temperature in June-August. The record summer high for the northern hemisphere was broken in 2010, 2005, 1998, 1995, 1990, 1988, 1987, and 1983, which also suggests that the probability of a record-breaking US summer is around 30% rather than 10%.
A little more searching turned up this NOAA/USHCN data set, which shows that the hottest summer (June-Aug) in recorded US history (contiguous 48 states, since 1895) is still 1936, so maybe 10% is closer to the truth. The 10 hottest US summers on record are 1936 (74.64 F), 2006 (74.36 F), 1934 (74.18 F), 2010 (73.96 F), 2002 (73.96 F), 1988, 2007, 2003, 1933, and 2001.
To make this needlessly precise, I fit a little model to the data and estimated that there’s a 7% chance of breaking the 1936 record and a 12% chance of topping the 2006 temperature. For the past few decades, it looks like there’s a linear trend plus some random noise. Fitting a line to the past 30 data points gives a .04/yr increase and puts the trend line at 73.35 F for 2011. The residuals have a standard deviation of .87. The record (74.64) is 1.29 degrees above the trend line for 2011, which makes it 1.48 standard deviations above the trend line. If the noise has a normal distribution, that would give a 7% chance of breaking the record (since p(z>1.48)=.07). A similar calculation gives a 12% chance of having the hottest summer of the past 70 years (breaking the 74.36 F mark set in 2006, which is 1.15 SDs above the trend line).
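A sketch reproducing that calculation from the numbers quoted above (the constants are the ones stated in the comment; only the normal-tail step is coded here):

```python
from scipy.stats import norm

trend_2011  = 73.35   # fitted trend-line value for summer 2011 (deg F)
resid_sd    = 0.87    # standard deviation of residuals around the 30-year linear fit
record_1936 = 74.64   # hottest US summer on record
record_2006 = 74.36   # hottest US summer of the past ~70 years

z_1936 = (record_1936 - trend_2011) / resid_sd   # ~1.48
z_2006 = (record_2006 - trend_2011) / resid_sd   # ~1.16

print(norm.sf(z_1936))  # ~0.07: chance of breaking the 1936 record
print(norm.sf(z_2006))  # ~0.12: chance of topping 2006
```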
Thanks! I had a sense that the global warmth of recent years hadn’t necessarily translated into a record-breaking summer in the US, but I hadn’t looked into the data like this.
Since when are 10% of summers historically hot?
Since climate change began pushing up average temperatures. See for example: http://www.google.com/hostednews/afp/article/ALeqM5jbK6a-zNlRk3Az-Upzue83KHF5Bw
It’s hard to get a good sense of precisely what the probability is, given that I’m not a climate scientist, but 10% sounds about right—perhaps even a little low.
It’s not climate science, it’s mathematics. The probability of a specific number being the highest in a sequence goes down rapidly as the number of items increases. And it’s not like the temperature is doubling every year, either.
That’s only true for a stationary series, which temperature isn’t. For a random walk series you can have a 50% chance of each new observation being the highest ever in the series. For a trended series it can be higher than 50%.
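A toy simulation of that contrast (my illustration, not from the thread; the drift and noise values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n_series, n_years = 20_000, 100

def p_last_is_record(series):
    # Fraction of simulated series whose final value exceeds every earlier value.
    return float(np.mean(series[:, -1] > series[:, :-1].max(axis=1)))

# Stationary i.i.d. series: by exchangeability this is exactly 1/n_years = 0.01.
iid = rng.normal(size=(n_series, n_years))

# Driftless random walk: the record probability decays only like 1/sqrt(n), so it stays at several percent.
walk = rng.normal(size=(n_series, n_years)).cumsum(axis=1)

# Random walk with upward drift: substantially higher still, depending on the drift-to-noise ratio.
drifting = rng.normal(loc=0.5, scale=1.0, size=(n_series, n_years)).cumsum(axis=1)

print(p_last_is_record(iid), p_last_is_record(walk), p_last_is_record(drifting))
```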
Not so. It is not mathematics itself that assumes a neat random distribution. Your assertion is about climate science.
Each of the last ten years is among the top 15 warmest years on record; the other years in the top 15 are '00, '99, '98, '97, and '95. It seems very likely 2011 will be in the top 10. There is of course variation, but you'd be crazy to expect a normal distribution. "Historically hot summer" is somewhat ambiguous, but I'd say there's a >0.5 chance 2011 is a top-5 warmest year (I don't have summer data; we might presume more variability there). ~10% for the hottest year on record doesn't sound crazy to me.
Okay, forget everything I just said; that probability does seem reasonable after seeing that data.
I was simply going by remembered frequencies: every year since I started paying attention I've heard, at least once, something of the form "This year/season/month/day was (one of) the hottest on record in Ontario/Canada/America/the world." I therefore take the probability of at least one of these things happening to be quite high, and so the probability of specifically the U.S. having specifically a "historically hot" summer, although small, is by no means negligible. 10% is a reasonable rough estimate.
Did you know that in certain parts of Europe, this winter was the first winter since 1945 in which it snowed for more than (some number) days before (some date)?
Media like records, so they will report quantities that attain a record value.
That’s true, but irrelevant. The fact that they’re being reported doesn’t change the fact that record values are, indeed, being attained.
It depends on how natural the records in question are. If there are 100 different records to be broken, you expect every year to break one and you should never be surprised when someone reports on it.
If you are choosing random properties and finding them to be extremal with reasonable probability, then you are getting a totally different sort of data.
This is also true but irrelevant. Skatche wasn’t making predictions about whether he would be surprised by reports of records being broken. Just a specific prediction about weather.
Looks like I was underconfident in retrospect: according to the source that I agreed on when I made the prediction, 2011 was the second-warmest US summer on record.
(Even though it wasn’t the warmest on record, the fact that it came in second is evidence that I probably should have estimated this at somewhere around the Intrade global temperature figure of 34%.)
http://predictionbook.com/predictions/2098 ; what data source do you plan to use?
The NOAA’s State of the Climate report, I think; I’ll look for the September 2011 version of this news release on the June-August summer.
70%: the next Mayor of Chicago is Rahm Emanuel.
Prediction closed as correct on PredictionBook.com (see other comment for link).
EDIT
pb.com is a site which sells postage meters...?
In context it means Prediction Book.
Thank you; I had hit “Show more comments above” without effect, but hadn’t referred back to the original post.
http://predictionbook.com/predictions/2091
100% of me not caring ;0
Seriously, why is this interesting other than self-test? I probably missed something.
But if you do end up caring, your score becomes negative infinity!
http://yudkowsky.net/rational/technical
Local politics can have practical relevance for locals.
Ah. I thought you were in Connecticut.
Over the last few years, in part due to Less Wrong, I've stopped paying attention to short-term issues. I basically don't consume news anymore, as I don't like wasting time learning facts that will be outdated in two weeks. I think this is a pretty good strategy for information consumption, but it means I have very few stable/reasonably distributed predictions with one-year horizons… and they're almost all basketball predictions.
Is it worthwhile to read political and economic news just so I have more things to test my calibration on? Would reading more news even improve my predictions or just make me more/falsely confident in them?
No. You want to calibrate yourself? Run through the dozen or so trivia calibration datasets which have been linked on LW, or work your way through the >=136 predictions for 2011 on PredictionBook.com.
If I were to read the NYT like I used to, cover to cover, I think my predictions would improve. Hidden in obscure articles were many early indicators of things to come.
If you were to read it, or were to read USA Today, I think your predictions would get worse than the sort of dumb Outside View-like prediction you would otherwise have been forced to make (‘Well, I don’t know much about the Korea situation or this succession thingy, but I do know there hasn’t been an active war in, like, half a century so I’ll put a low chance of hostilities being renewed’).
During 2011, there will be (90%) various discoveries (at least 2) of phenomena that will be announced as potentially leading towards much faster computers. However, the current wall of 3 to 4 GHz for the basic CPU clock speed of consumer-level computers will (95%) remain.
Since the mainstream desktop processors of 2011 are already in development, it’s easy to make predictions about them. More cores, better voltage and frequency scaling and clock gating, improved inter-core resource sharing, and tighter CPU/GPU integration in some AMD chips. Let’s say 95% for each of those.
On mobile devices—smartphones, tablets, and so on—we’re going to see some movement toward dual-core processors, mostly ARM Cortex-A8 and (on higher-end devices) Cortex-A9. The big thing in that space for 2011, I predict, will be cheaper all-in-one chips for making low-end smartphones. Take the new BCM2157 chip, for example: it has essentially everything you need for a low-end smartphone, on a single cheap-to-produce chip. Their goal is for you to be able to buy an Android phone for less than $100, and they’ll be mass-producing and shipping phones based on that chip by mid-2011. I predict that this will only continue, and dumb phones will be gradually superseded as smart phones become cheaper and more ubiquitous.
For predictions further out, I’m betting that in five years, some of the big things will be optical interconnects, networks-on-chip, mainstream GPGPU, 3D chips, and solid state drives. SSDs are already a safe bet; the rest is more speculative.
I’d put these on PB (and give optical interconnects a low prediction since it feels like I’ve been reading about them forever), if you could operationalize them/make objective & judgeable.
PredictionBook version of the second of these.
60%: conviction of Knox and Sollecito will be reversed. (Updated from 50% after last month’s DNA ruling.) PredictionBook.
That sounds promising!
Continuing riots in major European cities over the imposition of austerity measures: 80%. There have been four such riots in the past six months—in Paris, London, Rome and Athens—and there have been vows of more to come, so this seems pretty likely.
80% seems way underconfident.
Update?
http://predictionbook.com/predictions/2131
A UAV “peeping tom” story in the British press in 2011: 50% if there hasn’t been one already.
EDIT: I mean 2011, not 2010 of course. D’oh!
Annually is not often enough to be fun. Why not make minor predictions seven days out every Sunday? You’d be able to score yourselves better.
Annually’s long enough for large-scale trends to dominate. I think we’d really have to be embedded in current events for our seven-day predictions to be much better than random, and closely following the press or news blogs is anything but fun—they deal in exaggerated significance, and filtering that out is exhausting.
A year is a long time, though. I think the ideal period might be something around quarterly.
Predictionbook seems better suited for this purpose. Also, I think longer-term predictions are more rationality-loaded.
Prediction request for the emergence of usable (commercial-grade or easy DIY) eyewear computing of this sort:
http://www.lumus-optical.com/index.php?option=com_content&task=view&id=9&Itemid=15
http://blog.2yb.org/2010/07/cd-case-wearable-computer.html
Please feel free to state your distributions beyond 2011
This is one of the predictions I would include in my list of key trends over the span of this decade (rather than the year). I might in weaker moments also conjunction-fallacy it as an Apple product. I think there are some prerequisites that still need to be addressed at this point, including further maturation of mobile apps, some addressing of social acceptance of life-tracking-style media recording, battery life, etc. So yeah: an S-curve, with the cumulative probability that these achieve iPod-like sales figures hitting 50% around 2020.
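To pin down what that S-curve claim means numerically, here is a toy logistic curve (the midpoint and steepness below are illustrative parameters, not estimates from the comment):

```python
import math

def p_mainstream_by(year, midpoint=2020.0, steepness=0.5):
    # Cumulative logistic: crosses 0.5 at the midpoint year.
    return 1.0 / (1.0 + math.exp(-steepness * (year - midpoint)))

for year in (2012, 2016, 2020, 2024):
    print(year, round(p_mainstream_by(year), 2))   # ~0.02, 0.12, 0.50, 0.88
```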
Agreed, including the Apple conjunction fallacy.
75% probability of being mainstream, or at least not unusual, by 2020. It seems like the obvious solution: phone screens are too small, laptops and even tablets are too inconvenient to carry around constantly. And I’d go 50/30/20 on the first mass market product being based on Android/Apple/other. (With Android, anybody can build it without asking for permission).
The argument for Apple is that a killer-quality device in this category would require serious UI support from the OS, possibly new interactions (eye tracking for example).
http://predictionbook.com/predictions/2103
Despite media hysterics as to its imminent demise, the current coalition of parties in government in Australia will remain in power. (70%)
Newspapers in Australia will be found to be boosting subscription numbers by a significant amount to swindle advertisers, again. (40%)
http://predictionbook.com/predictions/2132
http://predictionbook.com/predictions/2133
Just some questions.
1) Why are most of the probabilities here percentages divisible by 5? Is there any reason they should be, I assume, rounded to the nearest 5? If that's the case, a 65% probability means "between 62.5 and 67.5", right? Um, maybe not. I bet some people here round their probabilities to the nearest 10.
2) I would love to see some kind of distribution as well. Can we say something like:
The distribution is skewed here, and it should be right-skewed if the probabilities involve, say, mutual agreements from a large number of independent parties. I’m not sure about the others, but it helps me imagine the picture.
3) hic Can someone make a prediction whether some of us here are time travelers (from the future)? [Got the joke?]
4) Can someone make predictions about predictions (including self)? Please.
Hello shadow, Welcome to LessWrong! I think the answers to your questions are as follows:
1) Working out exact probabilities for these kinds of predictions is unfeasible, so we approximate. Rounding to the 5 seems natural to a lot of us, and (I expect) it automatically conveys the approximate-ness of the prediction to most people.
2) Probability is in the mind. Therefore, giving a distribution of probabilities is just a way of saying “I am uncertain about how uncertain I am”.
4) Don’t think I understand this one, although this might qualify and/or interest you.
Next year:
A virus affecting Android is discovered which creates a small crisis in the mobile phone industry: 17%
A minor crisis in rare earth metals will cause an increase in the number of RE mining projects worldwide, judged by starts and investment activity: 15%
.… is caused by Chinese foreign policy, trade restrictions: 65%
.… is caused by an infrastructure disaster in China, natural or man-made: 30%
One of the largest reinsurance companies, top 10, will collapse because of underestimated basis risk: 6%
The New York art scene will be displaced as the center, process started with clear trajectory: 3%
.… resulted from further dilution of the classical collector pool by collectors driven by current prices: 65%
Next 10 years:
Governments of countries with shrinking or stable populations will have shifted to using a measure other than GDP as their primary benchmark of economic growth (80% of shrinking/stable states): 70%
“Hacker” in the sense of ‘malicious manipulator of others’ networks’ will be displaced by “hacker” in the sense of ‘someone involved in DIY projects and soft, fluffy Frauenfelderism’: 67%
Peak travel in the US will lead naturally to changes in urban design as people reduce regular travel to increase leisure travel: 53%
Worker retraining programs will be developed that focus on putting people with liberal arts degrees on a trajectory to technical master's degrees, functional in at least 3 of the 15 largest US states: 47%
We reach “peak carbon” in the developed world without a coordinated plan to do so (a clear plateau in carbon emissions per person): 40%
Each second-level “…” prediction is conditioned on the preceding first-level prediction, is that right?
Right.
“Peak travel in the US will lead naturally to changes in urban design as people reduce regular travel to increase leisure travel: 53%”
Could you elaborate on this some?
There are various processes underway that are already affecting urban, suburban, and exurban design... some of them are “style” and “trend”, some of them are a result of the cost:benefit experiment that the last 20-30 years of housing development has been (always is, to be fair), and some a result of fear, or gas prices, or the viability of the airline industry.
I'm curious what you see coming out of the interplay of regular travel vs. leisure travel...
This percentage seems surprisingly precise. How did you arrive at this number?
A one-in-six chance is 16.666...%; rounding up gives 17%. That's a possibility.
Not all of the other numbers in his post seem to be derived from fractions, though, so I'm not very confident in that.
I'm flattered by some of the speculations lower down. It's not nearly so complex. I basically asked myself a series of questions about each one until I got to a rough idea by tens. The number is then adjusted by 3% or 5%, with the former marking "just shy/north of" and the latter "between". If my anxieties, both about the event and about not wanting to put direct bias pressure on the number, seem satisfied, I go with it.
Reinsurance got 6% because I thought "a bit more than zero", then "more than that".
Also seems pretty low: 83% certain that Android, on so many platforms, is pretty secure?
I doubt Hyena believes that: there are already Android viruses, but none of them have caused a small crisis in the mobile phone industry.
Right. I think we might have one sufficiently dangerous or infectious that people actually care.
I know of at least one geocaching/wifi exploit to achieve non-GPS locating; a virus using that plus having some use for the phone’s location would certainly scare enough people to start a small crisis.
A major church figure will face allegations of child abuse.
Europeans will riot over reductions in social programs.
A vague new terrorist threat will lead to increased security procedures at American airports.
A conservative talk show host in America will openly endorse murder of atheists, homosexuals, or immigrants.
Video of a pop star engaged in sexual acts will be leaked to the public.
A b-list celebrity will die unexpectedly; CNN will declare this a national tragedy.
A natural disaster will strike a third world country, causing everyone to completely forget about Haiti once again.
Literally dozens of Americans, many of whom are on Social Security or Medicare, will protest government social programs in America as socialism.
North Korea will say or do something crazy.
So will Mel Gibson.
Stores will sell out of the newest overpriced shiny Apple gadget on release day.
The sun will rise each day except in areas above 66 degrees latitude.
All of these I think I can safely predict with 95% confidence. They aren't the most earth-shattering, but I'm rational enough to know my predictive limits.
Actually this seems pretty overconfident. Would you take an even money bet on the conjunction of all of them?
I guess I should have said 95% confidence on each of them rather than all of them. I would take 10-to-1 odds on any of them individually, and probably even money on all of them, depending on how the predictions were formalized (i.e. instead of "A b-list celebrity will die unexpectedly; CNN will declare this a national tragedy", something like "CNN will devote X hours of news time to the death of an actor who has not starred in a movie grossing over Y million in the last Z years, or a musician who has not made it onto the Billboard top 100 in at least Z years").
Out of curiosity, which ones would you think most likely to turn out wrong and lose the bet for me?
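For what it's worth, here is a quick back-of-the-envelope check of that even-money conjunction bet (a sketch only, under the simplifying assumption that the twelve predictions are independent, which several of them plainly are not):

```python
# Probability that all twelve predictions come true, treating each as an
# independent event held at 95% confidence (independence is an assumption).
p_each = 0.95
n_predictions = 12
p_conjunction = p_each ** n_predictions
print(round(p_conjunction, 2))  # 0.54
```

Under those stated confidences the conjunction lands just above 50%, which fits the "probably even money on all of them" answer above.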
These all sound somehow familiar… ;)
Oil price will go over $100/barrel at some time this year: 80%
World oil production (crude + condensate) will not exceed 2008 levels in any month of this year: 95%
No human-level (Turing-test passing) AGI this year: 99.5%
Kim Jong Il steps down or dies this year: 60%
Some physical phenomenon will be observed in the next 10 years that points physicists in a more fruitful direction than string theory: 50% (This last is pessimistic, because from the point of view of experimental falsifiability, practically ANYTHING would be better than string theory. I’m just positing that it may take 10 years for that anything to show up).
Oil over $100/bbl: http://predictionbook.com/predictions/2122
World oil production sounds like it would be a real pain to verify, so unless you have a specific data source in mind, I think I’m going to omit it—I certainly don’t want to spend hours googling and ad-hoc educating myself about world oil production until I can make a guess at how to judge the prediction.
AGI seems like a dupe of http://predictionbook.com/predictions/2092
Kim: http://predictionbook.com/predictions/2123
Physics: omitting because I don't know how one would judge it.
EDIT: oil has already been closed as correct. I was probably underconfident there: oil has shown such volatility in recent years that one should expect it to go over $100/bbl even with quasi-efficient markets in mind.
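To illustrate that point, here is a minimal Monte Carlo sketch (my own illustration, not how the prediction was actually reasoned about or judged): it asks how often a price path modeled as geometric Brownian motion touches $100/bbl within a year, assuming a starting price of roughly $90/bbl (about where WTI ended 2010) and an assumed 35% annualized volatility.

```python
import math
import random

def prob_touches_barrier(start=90.0, barrier=100.0, vol=0.35, drift=0.0,
                         days=252, trials=10_000):
    """Estimate the probability that a geometric-Brownian-motion price path
    touches the barrier at least once during the year.
    start, vol, and drift are illustrative assumptions, not market data."""
    dt = 1.0 / days
    hits = 0
    for _ in range(trials):
        price = start
        for _ in range(days):
            price *= math.exp((drift - 0.5 * vol ** 2) * dt
                              + vol * math.sqrt(dt) * random.gauss(0.0, 1.0))
            if price >= barrier:
                hits += 1
                break
    return hits / trials

print(prob_touches_barrier())  # typically around 0.7 under these assumptions
```

Even with no assumed upward drift, the touch probability comes out around 0.7 under these particular assumptions, i.e. in the same ballpark as the original 80%; with any upward drift it climbs higher.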
Thanks
2 - I've put up a more precise prediction: http://predictionbook.com/predictions/2134
5 - I’ve put up a more precise prediction asserting a Nobel prize win for a non-string unification theory by end of 2020.
5 - http://predictionbook.com/predictions/2135
No offense, but I think you are dramatically, shockingly, overconfident here at 50%. Have you ever looked at the time lag for Nobel Prizes? The lags tend to start at a few decades. Look at the recent Prize for fiber optics—that work was done, like, 50 years ago.
So, you expect a new theory to be created and worked out in all its details, get experimental support, go through the usual Kuhnian progress, and be ratified by a Nobel Prize in 10 years?
Wow yeah; good catch. You are of course right about the time lag, which I hadn’t researched when I pulled that guess out of my arse. I think I’ll retract this prediction.
#1. RIM's QNX-based OS7 makes a significant impact on the mobile market, with a partially open architecture to rival Google Android.
#2. Google Me, a competitor to Facebook, is finally launched. It utilizes an open-source framework like Diaspora. Not many people pay attention until later in the year.
#3. Diaspora makes a second, more full-fledged launch beyond that awkward launch we saw a few months ago, and Facebook buys them (so as to be ready for Google Me).
#4. 3D mobile devices (cell phones) are released.
#5. Huge technological advancements in RNAi (RNA interference) research, thanks to genomics research initiatives (Complete Genomics, 23andMe, etc.), allow for more testing on mice and plants. The era of more advanced, preventative medicine through technology begins.
95% confident that we will be no closer to knowing if P equals NP a decade down the line.
This is too imprecise for me to know what you’re asserting. I would guess that if we ask, say, Scott Aaronson in 2021 whether we are any closer to knowing if P equals NP than we were in 2011, he will say “yes”.
"No closer" may not be the ideal choice of words. As it stands, your prediction would be invalidated every time someone publishes a paper demonstrating a technique that doesn't help solve P ?= NP. That is one less dead end to explore...
If this is made more precise in a way similar to how cyphergoth has done so, I’d be willing to bet a lot against that claim.
California will implement austerity measures similar to the ones currently being implemented by European countries: 80%.
The bubble underlying the current Chinese boom will collapse: 35%.
Some European country will abandon the Euro: 20%.
too vague
What does a Chinese bubble collapse look like? Suggested operationalizations: Shanghai or Hong Kong Stock Exchange prices, GDP growth
effective dupe of http://predictionbook.com/predictions/1374 or http://predictionbook.com/predictions/2031
The Chinese bubble is certainly going to collapse, but I doubt it will be a sudden enough collapse to happen within the year. People can talk all they want about undervalued currency or export dependency; my money is on the demographic echo from the one-child policy, and on ecological and agricultural collapse from industrial pollution, both of which would play out on the scale of a decade or more instead of a year. Though a smaller bursting of the bubble could happen due to a general global economic downturn, the real kicker is still down the road a few years.
I think you’re suffering from availability bias. You can easily picture polluted countryside or a population crash, whereas “undervalued currency or export dependency” sound like a minor point of abstract economics.
You may want to look at this article to get a feel for the extent to which Chinese urban policies are driven by a desire to project an image rather than by any sensible internal policy. For example, people who aren't born residents aren't allowed to move into Shanghai or Beijing, while a third of the newly constructed buildings remain empty.
I'm not saying the near-term economic woes won't hurt China or bust some of their economic bubble. I just think these are less likely to be profoundly crippling. The urban development issues you mention are part of what's leading to China's environmental troubles, and will have bigger impacts than just near-term economic imbalance.
I’d personally put the probability of a country abandoning the Euro this year at <5%. I think the major European powers (e.g. Germany and France) are still committed enough to the monetary union to try to make things work out. However, if corrective action fails or is rejected by the voters of southern Europe, then I think we’ll see a greater willingness to abandon the Euro by all parties.
EDIT: This raises the related question of, “What is the probability that Greece, Spain, Portugal and Ireland will agree to and implement sufficient austerity measures to prevent a breakup of the Euro?”
Is this year or decade?
All three seem ridiculously high for next year, once vagueness is corrected.
This year.
Why do you think they’re too high?
#1. RIM's QNX-based OS7 makes a significant impact on the mobile market, with a partially open architecture to rival Google Android. − 80%
#2. Google Me, a competitor to Facebook, is finally launched. It utilizes an open-source framework like Diaspora. Not many people pay attention until later in the year. − 75%
#3. Diaspora makes a second, more full-fledged launch beyond that awkward launch we saw a few months ago, and Facebook buys them (so as to be ready for Google Me). − 65%
#4. 3D mobile devices (cell phones) are released. − 98%
#5. Huge technological advancements in RNAi (RNA interference) research, thanks to genomics research initiatives (Complete Genomics, 23andMe, etc.), allow for more testing on mice and plants. The era of more advanced, preventative medicine through technology begins. − 77%
Your font size is excessive.
This comment will be downvoted for being self-conscious: 20%.
This comment will be upvoted for the sake of irony: 35%.
This comment will be downvoted for attempting to get upvotes: 30%.
This comment will be upvoted for being explicit about that fact: 15%.
This comment will be downvoted for being explicit about being explicit about that fact: 15%.
I will regret posting this comment: 65%.
Downvoted because I don’t think I want to see more comments like it.
I ended up posting it out of sincere curiosity regarding whether it would go up or down. But I suppose it did amount to spam; I accept my downvotes with no unhappiness.
I find it somewhat troubling that my flip reply to your comment has netted me more karma than any of my other recent contributions.
I upvoted because I hope it settles at 0 points, making your whole comment look merely silly. I think the chances of that happening are ~11.293%
I would place the probability lower than that. −11 is a long way to come back from when the comment has already faded into history. Even if my upvote just moved it to −10 for the same reason. :)
Massively underconfident. I downvoted for self-consciousness, then I reconsidered and upvoted for the irony. But then I downvoted because I felt you were attempting to get upvotes, then I revised and upvoted for being explicit in that attempt. Then, later, I downvoted for being explicit about your explicitness.
Basically, you should have expected high 90s for all of those events. As it stands, your percentages would have you betting against any of those events (except the regret one) at even money, and you'd lose every one of those bets.
The lower confidences were to account for the fact that some people would read it and decline to vote either way; therefore P>50% of an upvote would not imply P<50% of a downvote. In hindsight it's a confusing scheme to parse into bets. Everyone who read (e.g.) the first prediction and didn't vote would count toward the 80% who didn't downvote it for being self-conscious. The first five percentages were, in my mind, predictions concerning the distribution of the actions of those who read the comment, where 'didn't vote' also counts toward the union. But as far as betting goes, there's no way to get data on how many people actually read it.
Downvoted to disincentivize this type of thing; it strikes me as karma-whoring.
Any sufficiently advanced karma-whoring is indistinguishable from a useful comment. I personally don’t care for karma, but I maintain that I regret the post for wasting people’s time.
I don't believe there are any real karma-whores on Less Wrong. I'm detailing my beliefs here in an attempt to accurately signal my ability to think about things; I presume it follows that anyone who can think for more than four seconds shouldn't actually continue to gain pleasure from getting karma for stupid comments. I attempt to signal this because I would not myself wish to learn of the existence of karma-whores on Less Wrong, and I assume you feel the same.
Note the (tenuous) irony; I predicted such criticisms of the post as I wrote it! I hoped people would enjoy reading it, not draw conclusions about karma-whoring, which would be bad because I do not gain anything by learning that I have made the readers of Less Wrong unhappy. I do further wonder how many up- or down-votes the first N predictions would have garnered on their own, but I won't tempt fate by running trials.
Upvoted for indistinguishable insight. Downvoted for the overused and inaccurate “don’t care about karma” signal. Downvoted 7 other comments by you at random because you don’t care and I’m in an arbitrary mood. :)
I also downvoted Normal’s comment because the “karma-whoring” comment was glaringly inaccurate.
I do care about whether I'm antagonising people and wasting their time, so naturally I pay attention to karma, as it's a reliable signal ;) But of itself it's pretty useless; given the chance, I wouldn't choose to press a button that bestowed 1000 magical karma points on my account.
You’d pass up the chance to study ontologically fundamental mental entities?!
That is the price of such an intense desire to signal one’s apathy toward karma! :P My loss, I suppose!
P.S. Luminosity + Radiance rules!
It was an interesting idea. I approve of this kind of meta-comment in general; I just don’t want it to become a bigger part of the comment pool and/or a way of accumulating karma. I do care about the karma system because I think it’s useful to know what intelligent people think of me (and I get a fuzzy feeling from positive reinforcement).
You assume correctly. I hope there aren’t any real karma whores either. I don’t really think of you as one, just of that sort of comment as the sort of thing a karma whore would do.
I did enjoy reading it, to a limited extent. That and the insightful, useful nature of the parent make this interaction a net gain for me. In conclusion, I upvoted the parent.
Huzzah!
Yes, I do agree that getting karma for pleasing but unproductive comments lessens the utility of karma; it should be more of a costly signal of an individual's utility to the community, where the criterion for upvoting matters (i.e. 'propagates rationality' is presumably most desirable). Upvotes for cheap jokes dampen the signal.
Downvoted because this kind of self-reference humor is old-hat here.
(actually I didn’t downvote because −14 is enough, but I agree with the downvoters)