Shock Levels are Point Estimates
This is a post from my blog, Space and Games. Michael Vassar has requested that I repost it here. I thought about revising it to remove the mind projection fallacy, but instead I left it in for you to find.
Eliezer Yudkowsky_1999 famously categorized beliefs about the future into discrete “shock levels.” Michael Anissimov later wrote a nice introduction to future shock levels. Higher shock levels correspond to belief in more powerful and radical technologies, and are considered more correct than lower shock levels. Careful thinking and exposure to ideas will tend to increase one’s shock level.
If this is really true, and I think it is, shock levels are an example of human insanity. If you ask me to estimate some quantity, and track how my estimates change over time, you should expect it to look like a random walk if I’m being rational. Certainly I can’t expect that my estimate will go up in the future. And yet shock levels mostly go up, not down.
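To illustrate with a toy model of my own (a coin whose bias is drawn from a uniform prior; nothing in the post depends on this particular choice): if I update rationally on one observation, my estimate moves, but its average over what I might observe stays exactly where it started.

```python
# A toy check that a rational estimate can't be expected to drift upward:
# draw a coin's bias from a uniform (Beta(1,1)) prior, observe one flip,
# and record the posterior-mean estimate of the bias.

import random

random.seed(0)
trials = 100_000
posterior_means = []
for _ in range(trials):
    bias = random.random()           # true bias, drawn from the prior
    heads = random.random() < bias   # one observation
    # Beta(1,1) prior plus one flip gives Beta(2,1) or Beta(1,2):
    posterior_means.append(2 / 3 if heads else 1 / 3)

# The prior mean was 0.5; the average updated estimate is still ~0.5.
print(sum(posterior_means) / trials)
```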
I think this is because people model the future with point estimates rather than probability distributions. If, when we try to picture the future, we actually imagine the single outcome which seems most likely, then our extrapolation will include every technology to which we assign a probability above 50%, and none of those to which we assign a probability below 50%. Since most possible ideas will fail, an ignorant futurist should assign probabilities well below 50% to most future technologies. So an ignorant futurist’s point estimate of the future will indeed be much less technologically advanced than that of a more knowledgeable futurist.
For example, suppose we are considering four possible future technologies: molecular manufacturing (MM), faster-than-light travel (FTL), psychic powers (psi), and perpetual motion (PM). If we ask how likely these are to be developed in the next 100 years, the ignorant futurist might assign a 20% probability to each. A more knowledgeable futurist might assign a 70% probability to MM, 8% for FTL, and 1% for psi and PM. If we ask them to imagine a plethora of possible futures, their extrapolations might be, on average, equally radical and shocking. But if they instead generate point estimates, the ignorant futurist would round the 20% probabilities down to 0, and say that no new technologies will be invented. The knowledgeable futurist would say that we’ll have MM, but no FTL, psi, or PM. And then we call the ignorant person “shock level 0” and the knowledgeable person “shock level 3.”
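To make the rounding concrete, here is a minimal sketch using the illustrative probabilities above (the 50% threshold and the exact numbers are assumptions of the example, not empirical claims):

```python
# A point-estimate futurist "rounds" each probability to 0 or 1, including
# a technology in their single imagined future only if it exceeds 50%.

ignorant = {"MM": 0.20, "FTL": 0.20, "psi": 0.20, "PM": 0.20}
knowledgeable = {"MM": 0.70, "FTL": 0.08, "psi": 0.01, "PM": 0.01}

def point_estimate_future(probs):
    """The single 'most likely' world: keep only technologies above 50%."""
    return [tech for tech, p in probs.items() if p > 0.5]

print(point_estimate_future(ignorant))       # [] -- "shock level 0"
print(point_estimate_future(knowledgeable))  # ['MM'] -- "shock level 3"

# Yet both assign the same expected number of new technologies:
print(sum(ignorant.values()), sum(knowledgeable.values()))  # 0.8 0.8
```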
So future shock levels exist because people imagine a single future instead of a plethora of futures. If futurists imagined a plethora of futures, then ignorant futurists would assign a low probability to many possible technologies, but would also assign a relatively high probability to many impossible technologies. There would then be no simple relationship between a futurist’s knowledge level and his or her expectation of the overall amount of technology that will exist in the future, although more knowledgeable futurists would be able to predict which specific technologies will exist. Shock levels would disappear.
I do think that shock level 4 is an exception. SL4 has to do with the shocking implications of a single powerful technology (superhuman intelligence), rather than a sum of many technologies.
I think a better analogy to shock levels is not predictions about the future, but rather, the ability to think about a certain class of predictions without generating large amounts of emotion. Hence, being able to predict that my Shock Level will go up is really not a violation of conservation of expected evidence, because my Shock Level is really a property of emotional reactions, not beliefs. If I practice standing right in front of cliffs every day for a month, I can predict reasonably confidently that I’ll have less fear of heights at the end of the month than at the beginning; same principle.
Well, the original concept of shock levels was defined in terms of what ideas you are comfortable with (hence the word ‘shock’). What you believe in is a different matter. As a former Singularitarian, I’m comfortable talking about as wild a scenario as you care to come up with; doesn’t mean I believe it’s actually going to happen.
That having been said, suppose for the sake of argument that we adopt your definition in this context; the paradox is then easy to resolve. “Higher shock levels correspond to belief in more powerful and radical technologies, and are considered more correct than lower shock levels.” Considered more correct by whom? By people at higher shock levels. In other words, Alice (SL4) thinks every step Bob (SL0) takes in the direction of higher shock levels makes him more correct. This is not surprising! Symmetrically, Bob thinks every step Alice takes in the direction of lower shock levels makes her more correct. (And yes, people do take steps in both directions.) Thus we cannot deduce the correct conclusion merely by looking at who steps in which direction, as indeed we should not be able to. The paradox disappears.
Insanity, or ignorance. I don’t think most people expect their own shock levels (defined this way) to go up, or know that others’ shock levels mostly go up. If Alice knows more than Bob, and Bob doesn’t know about Alice, is it insane for Alice to be able to predict how Bob’s beliefs will change as he learns the things she knows?
(Not to say that people aren’t insane, of course. Besides point estimation, fighting a rearguard retreat against the evidence seems like a big factor.)
Computer programmers with decades of experience still systematically underestimate how much time it will take to write and debug a program. The time-to-completion estimate for every project goes up over the course of the project.
I don’t think most people even consider the possibilities that you suggest they are dismissing through point estimation.
Shock Levels are partly about what kind of technology is possible, though, and since we never find that something we’ve been doing is impossible, there’s a built-in ratchet. A random walk would only be possible, and only for a while, if your initial estimates were extremely high.
I wonder how many people didn’t read the third sentence?
I didn’t, nor (I think) the second. I think I jumped into skimming mode when I learned that this was a repost, even though the first paragraph was obviously new. Or maybe the first paragraph primed people to look for mistakes, but by the time they reached the end, they forgot. I think there’s something here to learn about writing.
Had you already read the post on Peter’s blog?
The third sentence is actually pretty interesting btw.
Yes, I had, but now I think it’s not so much that I entered skimming mode as that I skipped ahead to find out whether it was a post I had already read. Another possibility (which applies to people who had never seen Peter’s blog) is that I decided administrative paragraphs are boring.
Let’s work out the full distributions:
Both futurists should, of course, expect 0.8 technologies of radical change on average. The ignorant one assigns roughly a 41% chance to none of them, 41% to exactly one, 15% to two, about 2.5% to three, and 0.2% to all four.
The more realistic assignment (though still outrageously high for everything but MM) gives a 27% chance of nothing, a 66% chance of exactly one, a 7% chance of two, about a 0.1% chance of three, and a negligible chance of all four.
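For anyone who wants to check these figures, a small sketch that enumerates all 2^4 outcomes, assuming (as the example does) that the four technologies are independent:

```python
# Tally the probability of developing exactly k of the four technologies
# by enumerating all 16 possible combinations.

from itertools import product

def dist_over_counts(probs):
    counts = [0.0] * (len(probs) + 1)
    for outcome in product([0, 1], repeat=len(probs)):
        p = 1.0
        for happened, prob in zip(outcome, probs):
            p *= prob if happened else (1 - prob)
        counts[sum(outcome)] += p
    return counts

print(dist_over_counts([0.20, 0.20, 0.20, 0.20]))
# [0.4096, 0.4096, 0.1536, 0.0256, 0.0016] -- about 41%, 41%, 15%, 2.5%, 0.2%
print(dist_over_counts([0.70, 0.08, 0.01, 0.01]))
# about 27%, 66%, 6.8%, 0.1%, ~0% -- matching the figures above
```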
I like the post and the links. I do get the impression that this post is talking about a somewhat different thing than the “shock levels” as Eliezer_1999 described them.