Reference class of the unclassreferenceable
One of the most useful techniques of rationality is taking the outside view, also known as reference class forecasting. Instead of thinking too hard about the particulars of a given situation and taking a guess, which will invariably turn out to be highly biased, one looks at the outcomes of situations which are similar in some essential way.
Figuring out the correct reference class might sometimes be difficult, but even then it’s far more reliable than trying to guess while ignoring the evidence of similar cases. Now, in some situations we have precise enough data that the inside view might give the correct answer—but for almost all such cases I’d expect the outside view to be just as usable and not far behind in accuracy.
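(A minimal sketch of the contrast, with made-up numbers standing in for any real dataset: the outside view just summarizes how similar past cases turned out, while the inside view reasons from this case’s particulars.)

```python
# Hypothetical track record: how long seven similar projects actually took (weeks).
past_durations = [12, 15, 9, 20, 14, 18, 11]

# Inside view: an estimate built from this project's particulars --
# typically an optimistic one.
inside_view = 6

# Outside view: ignore the particulars and use the track record.
outside_view = sum(past_durations) / len(past_durations)
print(f"inside view: {inside_view} weeks; outside view: ~{outside_view:.0f} weeks")
```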
Something that keeps puzzling me is the persistence of certain beliefs on Less Wrong. Take the belief in the effectiveness of cryonics—the reference class of things promising eternal (or very long) life is huge and has a consistent 0% success rate. The reference class of predictions based on technology which isn’t even remotely here has a perhaps non-zero but still ridiculously tiny success rate. I cannot think of any reference class in which cryonics does well. Likewise the belief in a singularity—the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate. The reference class of beliefs in almost omnipotent good or evil beings has a consistent 0% success rate.
And many fellow rationalists not only believe that the chances of cryonics or the singularity or AI are far above the negligible levels indicated by the outside view, they consider them highly likely or even nearly certain!
There are a few ways this situation can be resolved:
Biting the outside-view bullet, as I do, and assigning very low probability to them.
Finding a convincing reference class in which cryonics, the singularity, superhuman AI, etc. are highly probable—I invite you to try in the comments, but I doubt this will lead anywhere.
Or identifying a class of situations for which the outside view is consistently and spectacularly wrong, where the data is not good enough for precise predictions, and yet where we somehow think we can predict reliably.
How do you reconcile them?
Meta question here: why does reference-class forecasting work at all?
Presumably, the process is that when you cluster objects by visible features, you also cluster all their invisible features, and the invisible features are what determine the time evolution of those objects.
If the category boundary of the “reference class” is a simple one, then you can’t fool yourself by interfering with the statistical correlation between visible and hidden attributes.
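A toy simulation of that mechanism, with every number invented for illustration: objects carry a hidden success rate correlated with a visible label, and the observed within-class frequency recovers the hidden rate far better than a one-off guess about any single object could.

```python
import random

random.seed(0)

# Invented setup: a visible label correlates with a hidden success rate.
hidden_rate = {"A": 0.8, "B": 0.1}
labels = random.choices(["A", "B"], k=1000)
outcomes = [(lab, random.random() < hidden_rate[lab]) for lab in labels]

# Reference class forecast for a new "A"-labeled object: the success
# frequency among past "A"-labeled objects.
a_hits = [hit for lab, hit in outcomes if lab == "A"]
print(sum(a_hits) / len(a_hits))  # comes out close to the hidden 0.8
```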
For example, reference class forecasting predicts that cryo will not work because cryo got clustered with all the theistic religious afterlives, and things like the alchemists’ search for the Elixir of Life. The visible attribute we’re clustering on is “actions that people believe will result in an infinite life, or >200 year life”.
But a cryonics advocate might complain that this argument rides roughshod over all the careful inside-view reasoning that cryonicists have done about why cryo is different from religion or superstition: namely, that we have a scientific theory for what is going on.
If you drew the boundary around “Medical interventions that have a well accepted scientific theory backing them up”, then cryo fares better. The different boundaries you can draw lead to focus upon different hidden attributes of the object in question: cryonics is like religion in some ways, but it is also like heart transplants.
I suggest a reference class “Predictions of technologies which will allow humans to recover from what would formerly have been considered irreversible death”, with successful members such as heart transplants, CPR, and that shock thing medical shows are so fond of. (You know, where they shout “Clear!” before using it.)
Having got 15 net upvotes but no replies, I feel an obligation to be my own devil’s advocate: All three of my examples deal with the heart, which is basically a pump with some electric control mechanisms. Cryonics deals with the brain, which works in very different ways. It follows that, unless we can come up with some life-prolonging techniques that work on the brain, my suggested reference class is probably wrong.
That said, we do have surgery for tumours and some treatments to prevent, reduce in severity, and recover from stroke. Again, though, these deal with the mechanical rather than informational aspects of the brain. I do not care to hold up lobotomy as life-prolonging. Does anyone know of procedures for repairing or improving the neural-network part of the brain?
An example regarding the brain would be successful resuscitation of people who have drowned in icy water. At one time they would have been given up for dead, but now it is known that for some reason the brain often survives for a long time without air, even as much as an hour.
Repairing? To what? How can you tell what the original setup was? Improving? Same problem: what is considered an improvement? I guess that might be subjective. In my opinion, imaging techniques will make cryonics disappear once the captured information is enough for neural-network reconstruction.
So to sum up, you think you have a heuristic “On average, nothing ever happens for the first time” which beats any argument that something is about to happen for the first time. Cases like the Wright Brothers (reference class: “attempts at heavier-than-air flight”) are mere unrepeatable anomalies. To answer the fundamental rationalist question, “What do you think you know and how do you think you know it?”, we know the above is so because experiments show that people could do better at predicting how long it will take them to do their Christmas shopping by asking “How long did it take last time?” instead of trying to visualize the details. Is that a fair summary of your position?
“On average, nothing ever happens for the first time” is an erroneous characterization because it ignores all the times where the predictable thing kept on happening. By invoking the first time you restrict the reference class to those where something unusual happened. But if usually nothing unusual happens (hmm...) and those who predict the unusual are usually con artists as opposed to genius inside analyzers (is this really so unreasonable a view of history?), then he has a point.
“Smart people claiming that amazing things are going to happen” sometimes leads the way for things like the Wright Brothers, but very often nothing amazing happens.
Sure. But then the question becomes, are we really totally surprised without benefit of hindsight? Can we really do no better than to predict that no flying machine will ever be built because no flying machine ever has been? The sin of underconfidence seems relevant here; if it’s not a sin to try to do better, we could do a bit better than if we were blind to everything but the reference class.
Can we really do no better than to predict that no perpetual motion machine will ever be built because no perpetual motion machine ever has been?
But the fact that no perpetual motion machine has been built is not the reason we believe the feat to be impossible. We have independent, well-understood reasons for thinking the feat impossible.
As Robin Hanson has pointed out, thermodynamics is not well understood at all.
Conservation of energy is more basic than thermodynamics.
But you illustrate my point; it seems possible to discriminate between the probabilities we assign to perpetual motion machines, especially those built from classical wheels and gears without new physics, and flying machines, even without benefit of hindsight.
Indeed, it is obvious that heavier than air flight is possible, because birds fly.
Everyone in the past who has offered a way to “cheat death” has failed miserably. That means that any proposed method has a very low prior probability of being right. There are far more cranks than there are Einsteins and Wright Brothers. The set of “complete unknowns who come out of nowhere and make important contributions” is nearly empty—the Wright Brothers are the only example that I can think of. Even Einstein wasn’t a complete unknown coming from outside the mainstream of physics. Being a patent clerk was his day job. Einstein studied physics in graduate school, and he published many papers in academic journals before he had his Miracle Year. So no, I wouldn’t have believed that the Wright Brothers could make an airplane until they demonstrated that they had one.
And it’s often futile to look at the object-level arguments. It’s not that hard to come up with a good-sounding object-level argument for damn near anything, and if you’re not an expert in the relevant field, you can’t even distinguish well-supported facts from blatant lies.
I entertain the notion that the outside view might be a bad way of analyzing some situations; the post is a question about what this class might look like, and how we know a situation belongs to it. I’d definitely take the outside view as the default type of reasoning—the inside view by definition has no evidence behind it, not even of something as basic as a lack of systematic bias.
The way you describe my heuristic is not accurate. There are cases where something highly unusual happens, but these tend to be extremely difficult to reliably predict—even if they’re really easy to explain away as bound to happen with the benefit of hindsight.
For example, I’ve heard plenty of people being absolutely certain that the fall of the Soviet Union was virtually certain and caused by whatever they like to believe—usually without even a basic understanding of the facts, though many experts make the identical mistake. The fact is—nobody predicted it (ignoring the background noise of people who “predict” such things year in, year out)—and the relevant reference classes showed quite a low (not zero, but far lower than one) probability of it happening.
Everyone I knew from the Intelligence community in 1987–1989 was of the opinion that the Soviet Union had less than 5 years left, 10 at the most. Between 1985 and 1989, they saw massive yearly increases in contacts from Soviets wishing either to defect or to pass information about the toppling of the control structures. None of them were people who made yearly predictions about a fall, and every one of them was unhappy about the situation (as every one of us lost our jobs as a result). I’d hardly call that noise.
Is this track record documented anywhere?
Probably not. I could probably track down an ex-girlfriend’s brother who was in the CIA, who also had looming fears dating from the mid-80s (he’s who explained it to me, originally)...
Now, there may be books written about the subject (I would expect there to be a few), but I can’t imagine anyone in any crowd I have ever hung with being into them. I’ll check with some Military Historians I know to see.
Edit: After checking with a source from the Journal of International Security, he says that there is all kinds of anecdotal evidence of guys standing around the water cooler speculating about the end of the Cold War (on all Mil/Intel fronts), yet there are only two people who made any sort of hard prediction (and one of those was kinda after the fact—I am sure that will draw a question or two. The after-the-fact guy was from Stanford; he will forward a name as soon as he checks his facts).
He also says that all sorts of Policy Wonks managed to pull quotes from past papers claiming that they had predicted such a thing, yet if one examines their work, one finds that they had also made many other wild predictions regarding the Soviet Union eventually eclipsing the West.
Now that I have looked into this, I am anxious to know more.
OH! As for the defection rates. Most of that is still classified, but I’d bet that there is some data on it. I completely forgot to ask about that part.
The Outside View’s Domain
Not ‘by definition’; if you justify using IV by noting that it’s worked on this class of problems before, you’re still using IV. Semantic quibbles aside, this really sounds to me like someone trying to believe something interpersonally justifiable (or more justifiable than their opponent), not be right.
What objective source did you consult to find the relevant reference classes or to decide who was noise? Is this a case of “all sheep are black and there is a 1% experimental error”?
Would you buy:
“After something happens, we will see the occurrence as a part of a pattern that extended back before that particular occurrence.”
The Wright Brothers may have won the crown of “first”, but there were many, many near misses before. http://en.wikipedia.org/wiki/First_flying_machine
And if superintelligence were created tomorrow, people would choose new patterns and say exactly the same thing, and they’d probably even be right. So what?
The original article went too far in the direction of “the future will be like the past”, but you may have overcorrected.
Was it you who said something like “The future will stand in relation to the past as a train smoothly pulling out of a station—and yet prophecy is still difficult”?
Scavenging the past for preexisting patterns isn’t as sexy as, say, working out scenarios for how the world might end in the future, recursively trying to understand understanding, or prophesying the end of prophecy. Because it’s not as sexy, we may do too little of it.
Trying to understand patterns on a sufficiently deep level for them to be stable, and projecting those patterns forward to arrive at qualitative and rather general predictions not involving e.g. happy fun specific dates, is just what I try to do… which is here dismissed as “the Inside View” and rejected in favor of “that couldn’t possibly happen for the first time”, which is blessed as “the Outside View”.
Have you had any successes?
http://lesswrong.com/lw/ri/the_outside_views_domain/
http://lesswrong.com/lw/vz/the_weak_inside_view/
Also: http://lesswrong.com/lw/rj/surface_analogies_and_deep_causes/
So you’re basically taking an extreme version of position 3 from my list—rejecting the outside view as very rarely applicable to anything. Am I right?
Works great when you’re drawing from the same barrel as previous occasions. Project prediction, traffic forecasts, which way to drive to the airport… Predicting the far future from the past—you can’t call that the Outside View and give it the privileges and respect of the Outside View. It’s an attempt to reason by analogy, no more, no less.
I certainly do. It’s my strong impression that so does almost everyone outside of the Less Wrong community and a majority of people in this community, so according to the outside view of majoritarianism I’m probably right.
Taleb’s “The Black Swan” is basically a treatise on failures from uses of the outside view.
It sometimes seems to me that the issue of how much trust to accord outside views constitutes the primary factional division within this community, separating it into two groups that one might call the “Hansonians” and the “Yudkowskians” (with the former trusting outside views—or distrusting inside views—more than the latter).
I share Michael Vassar’s impression about the statistical distribution of these viewpoints (I’m particularly expecting this to be the case among high-status community members), but an actual survey might be worth conducting.
Of course there are a lot of failures from uses of the outside view. That is to be expected. The problem is that there are a lot more failures from uses of the inside view.
Citation needed (for the general case).
I hereby assign all your skepticism to “beliefs the future will be just like the past” with associated correctness frequency zero.
PONG.
Your move in the wonderful game of Reference Class Tennis.
As you can see from his response above, “These were slow gradual changes over time...” he is not saying that the future will be just like the past. There are plenty of ways that the future could be very different from the past, without superpowerful AI, singularities, or successful cryonics. So your reference class is incorrect.
Well taw is saying that the future will be just like the past in that the future will have slow gradual changes over time. I guess an appropriate response to that idea is Surprised by Brains.
There could also be fast sudden changes in a moment, without AI etc. So he isn’t necessarily saying that, he was just pointing out that in those particular cases, those changes were slow and gradual.
For most of human history, the future pretty much was like the past. It’s not hard to argue that, between the Neolithic Revolution and the Industrial Revolution, not all that much really changed for the average person.
Things that still haven’t changed:
People still grow and eat wheat, rice, corn, and other staple grains.
People still communicate by flapping their lips.
People still react to almost any infant communications or artistic medium in the same way: by trying to use it for pornography and radical politics, usually in that order.
People still fight each other.
People still live under governments.
People still get married and live in families.
People still get together in large groups to build impressive things.
People still get sick and die of infectious disease—and doctors are still of questionable value in many cases.
You’re only talking about human history. The history of the world is much longer. You’re also ignoring the different rates of change between genes, brains, agriculture, industry, and computation.
ETA: You edited your comment while I was typing mine.
You typed that. Is this a joke?
And not much changed between the extinction of the dinosaurs and the beginnings of human culture, either.
returns ball
Isn’t that an excellent example of how a reference class forecast can fail miserably?
“Not much changed between 65,000,000 years ago and 50,000 years ago, therefore not much will change between 50,000 years ago and now.” is basically the argument, but notice that we’ve had lots of changes within the past few hundred years, let alone the last 50,000.
Said argument doesn’t give certainties; it only gives you the chance of something happening in the next 50,000 years based on what happened in the past—the chance correctly being extremely low.
The chance of an event more extreme than anything that has ever happened before depends on your sample size. If your reference class is tiny, you need to assign high probability to extreme events; if your class is huge, the probability of an extreme event is low. (The main complication is that samples are almost never close to being independent, and figuring out exact numbers is really difficult in practice. I’m not going to get into this; there might be some estimation method for it based on meta-reference-classes.)
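One standard way to put numbers on that (my gloss, not part of the comment above): if the next case is exchangeable with the n cases already in the reference class, the chance that it is more extreme than all of them is

$$P\big(X_{n+1} > \max(X_1, \dots, X_n)\big) = \frac{1}{n+1},$$

so a class of 10 cases leaves roughly a 9% chance of an unprecedented outcome, while a class of 10,000 leaves about 0.01%.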
Downvoted because I wanted to hear more about why it belongs in that reference class.
It doesn’t. I simply don’t believe in Reference Class Tennis. Experiments show that the Outside View works great… for predicting how long Christmas shopping will take. That is, the Outside View works great when you’ve got a dozen examples that are no more dissimilar to your new case than they are to each other. By the time you start trying to predict the future 20 years out, choosing one out of a hundred potential reference classes is assuming your conclusion, whatever it may be.
How often do people successfully predict 20 years out—let alone longer—by picking some convenient reference class and saying “The Outside View is best, now I’m done and I don’t want to hear any more arguments about the nitpicky details”?
Very rarely, I’d say. It’s more of a conversation-halter than a proven mode of thinking about that level of problem, and things in the reference class “unproven conversation halter on difficult problems” don’t usually do too well. There, now I’m done and I don’t want to hear any more nitpicky details.
Economic growth and resource shortages. Many times it has seemed like we were imminently going to run out of some resource (coal in the 1890s, food scares in the 60s, global cooling, peak oil) and economic growth would grind to a halt. The details supported the view (existing coal seams were running low, etc.), but a reference class of other 20-year periods after 1800 would have suggested, correctly, that the economy would continue to grow at about 2-3% (a mechanical sketch of this follows below).
Alternatively, politics. Periodically it seems like one party has achieved a permanent stranglehold on power—the Republican Revolution, Obama a year ago, the Conservatives in 1983, Labour in 1945 and 1997—but ignoring the details of the situation, and just looking at other decades, we’d’ve guessed correctly that the other party would rise again.
Recessions. While going into a recession, it always appears to be the Worst Thing Ever, and to signal the End of Capitalism; worse than 1929 for sure. Ignoring the details and looking at other recessions, we get a better, more moderate prediction.
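Mechanically, the first example’s forecast amounts to the sketch below; the window figures are illustrative stand-ins, not real GDP data.

```python
# Illustrative stand-ins, not real GDP data: annualized growth (%) over
# successive historical 20-year windows.
window_growth = [2.1, 2.9, 2.4, 3.1, 1.8, 2.6, 2.7, 2.3, 3.0, 2.5]

# The outside-view forecast for the next 20 years is just this distribution,
# whatever the current headlines say about coal, food, or oil.
mean = sum(window_growth) / len(window_growth)
print(f"forecast: ~{mean:.1f}%/yr (historical range {min(window_growth)}-{max(window_growth)}%)")
```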
These all seem like good examples of Outside-View-based long-term forecasting, though they could well have been somewhat cherry-picked. That is, you are citing a group of cases where things did in fact turn out the same way as last time.
Suppose we consider nuclear weapons, heavier-than-air powered flight, the Cold War, the Cold War’s outcome, the moon landing, the dawn of computers, the dawn of the Internet, &c. What would the Outside View have said about these cases? How well did smart Inside Viewers like Nelson or Drexler do on “predicting the rise of the Internet”, or how well did Szilard do on “predicting nuclear detente”, relative to anyone who tried an Outside View? According to taw, the Outside View is “that’s never happened so it never will happen”, and honestly this is usually what I hear from those formally or informally pleading the Outside View! It seems to have been what Szilard heard as well. So while Nelson or Drexler or Szilard should have widened their confidence intervals, as I advocate in “The Weak Inside View”, they did better than the so-called Outside View, I’d say.
None of those inventions were big enough to change our larger reference classes: flight didn’t push up trend GDP growth, nuclear weapons didn’t change international relations much (a country is more powerful in proportion to its GDP, military spending, and population), and the end of the Cold War didn’t bring world peace. Rather, the long-run trends like 3% growth and a gradual reduction in violence have continued. All the previous game-changers have ended up leaving the game largely unchanged, possibly because we adapt to them (as in the Lucas critique). If all these inventions haven’t changed the fundamentals, we should be doubtful FAI or uploads will either.
In short: the outside view doesn’t say that unprecedented events won’t occur, but it does deny that they’ll make a big difference.
A better counter-example might be the industrial revolution, but that’s hardly one event.
WTF?!? Nukes didn’t change international relations? We HAVE world peace. No declarations of war, no total wars. Current occupations are different in kind from real wars.
Also, flight continued a trend in transport speeds which corresponded to continuing trends in GDP.
“We HAVE world peace”—I get your meaning, but I think we should set our standards a bit higher for “peace.”
Compare now to Pax Britannica or Pax Romana. The general trend towards peace has continued, and there are still small wars. Also, I hardly think the absence of a declaration is particularly significant.
Exactly: flight continued a pre-existing trend; it didn’t change history.
Seems to who? I’ve never noticed anyone taking this opinion.
http://www.google.com/search?q=permanent%20republican%20majority
http://www.google.com/search?q=permanent+democratic+majority
Hmm. I think I would have preferred to italicize “noticed” rather than what you did.
Perhaps. But I am far more annoyed by people who know better throwing around absolute terms, when they also know counterexamples are available in literally 3 or 4 seconds—if they would stop being lazy and would just look.
(I’m seriously considering registering an account ‘LetMeFuckingGoogleThatForYou’ to handle these sorts of replies; LW may be big enough now that such role-accounts are needed.)
Sockpuppetry considered harmful.
“Considered Harmful” Considered Harmful
The absolute terms were appropriate, referring as they did only to my personal experience. It was only intended as a weak, throwaway comment. I suppose you might be annoyed that I think such anecdotes are worthy of mention.
Edited to add: If you’d quoted instead “Seems to who?” I wouldn’t have found your comment at all objectionable.
Already done: JustFuckingGoogleIt
You can link to searches with Let Me Google That for You
I’ve seen Arnold Kling, GMU economics blogger (colleague of Robin Hanson, I think), argue something like that.
This was the example that first sprung to mind, though recently he’s admitted he’s not so sure.
Anyone who predicts a stranglehold on politics lasting longer than a decade is crazy. Not that it doesn’t happen, but you can’t possibly hope to see that far out. In 1997 I thought Labour would win a second term, but I wasn’t confident of a third (which they got) and I would have been mad to predict a fourth, which they’re not going to get. I don’t think there were very many people saying “the Tories will never again form a government” even after the 1997 landslide.
I predict that after the 2010 elections, someone will predict that whichever party came out on top will now have a stranglehold on power. My reference class is the set of post-election predictions after every US election I’ve watched.
“Very rarely, I’d say.” I think with a little more effort put into actually investigating the question, we could find a better measure of how often people have made successful predictions of the future 20 years in advance or longer using this method.
Check a source of published predictions, and you’ll find some nice statistics on how well entertainers selling Deep Wisdom manage to spontaneously and accidentally match reality. My guess is that it won’t be often.
It also depends on which aspects of the future one is trying to predict… I’ll go out on a limb here, and say that I think the angular momentum of the Earth will be within 1% of its current value 20 years out.
Even that is by no means certain if there are superintelligences around in 20 years, which is by no means impossible. The unFriendly ones especially might want to use Earth’s atoms in some configuration other than a big sphere floating in space unconnected to anything else.
Good point—I’d thought that physical constraints would make disassembling the Earth take a large fraction of that time, but solar output is sufficient to do the job in roughly a million seconds, so yes, an unFriendly superintelligence could do it within the 20 year time frame.
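For what it’s worth, a back-of-the-envelope version of that check, using standard constants (the arithmetic is mine, not the commenter’s): Earth’s gravitational binding energy divided by the Sun’s total luminosity gives

$$t \approx \frac{E_{\text{bind}}}{L_\odot} \approx \frac{\tfrac{3}{5}\,G M_\oplus^2 / R_\oplus}{3.8\times 10^{26}\,\mathrm{W}} \approx \frac{2.2\times 10^{32}\,\mathrm{J}}{3.8\times 10^{26}\,\mathrm{W}} \approx 6\times 10^{5}\,\mathrm{s},$$

about a week, assuming the entire solar output could be captured and applied with perfect efficiency.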
I’ll nominate hypotheses or predictions predicated on materialism, or maybe the Copernican/mediocrity principle. In an indifferent universe, there’s nothing special about the current human condition; in the long run, we should expect things to be very different in some way.
Note that a lot of the people around this community who take radically positive scenarios seriously, also take human extinction risks seriously, and seem to try to carefully analyze their uncertainty. The attitude seems markedly different from typical doom/salvation prophecies.
(Yes, predictions about human extinction events have never come true either, but there are strong anthropic reasons to expect this: if there had been a human extinction event in our past, we wouldn’t expect to be here to talk about it!)
Feynman’s anticipation of nanotechnology is another prediction that belongs to that reference class.
How do you get from “in the long run, we should expect things to be very different in some way” or “hypotheses or predictions predicated on materialism” or “Copernican/mediocrity principle” to cryonics, superhuman AIs, or foom-style singularity?
The human brain is made out of matter (materialism). Many people’s brains are largely intact at the time of their deaths. By preserving the brain, we give possible future advanced neuroscience and materials technology a chance at restoring the original person. There are certainly a number of good reasons to think that this probably won’t happen, but it doesn’t belong in the same reference class as “predictions promising eternal life,” because most previous predictions about eternal life didn’t propose technological means in a material universe. Cryonics isn’t about rapturing people’s souls up to heaven; it’s about reconstruction of a damaged physical artifact. Conditional on continued scientific progress (which might or might not happen), it seems plausible. I do agree that “technology which isn’t even remotely here” is a good reference class. Similarly …
Intelligence doesn’t require ontologically fundamental things that we can’t create more of, only matter appropriately arranged (materialism). Humans are not the most powerful possible intelligences (mediocrity). Conditional on continued scientific progress, it’s plausible that we could create superhuman AIs.
Human minds are not the fastest-thinking or the fastest-improving possible intelligences (mediocrity). Faster processes outrun slower ones. Conditional on our creating AIs, some of them might think much faster than us, and faster minds probably have a greater share in determining the future.
These are fine arguments, but they all take the inside view—focusing on particulars of a situation, not finding big robust reference classes to which the situation belongs.
And in any case you seem to be arguing that such inventions are not prohibited by the laws of physics, rather than that they will happen with very high probability in the near future, as many here believe. As a reference class, things which are merely not prohibited by the laws of physics almost never happen anyway—this class is just too huge.
Things not prohibited by physics that humans want to happen don’t happen eventually? Very far from clear.
Alter these reference classes even a tiny bit and the result you get is basically the opposite. For cryonics, just use the reference class of cases where people thought either a) that technology X could prolong the life of the patient, b) that technology X could preserve wanted items, or c) that technology X could restore wanted media. Comparing it to technologies like these seems much more reasonable than taking the single peculiar property of cryonics (that it could, theoretically, for the first time grant us immortality) and using only that as a reference class. You could use the same move of taking the peculiar property as the reference class against any developing technology and consistently reach a ~0% chance for it, so it works as a perfectly general counterargument too.
The coming of a new world seems a more reasonable reference class for the singularity, but you seem to be interpreting it a bit more strictly than I would. I’d rephrase it as the reference class of enormous changes in society, and there have indeed been many of those. Also, we note that processing and spreading information has been crucial to many of these, so narrowing our reference class to the crucial properties of the singularity (which basically just means “a huge change in society due to an artificial being that is able to process information better than we are”), we actually reach the opposite result from yours.
We do have a fairly good track record of making artificial beings that replicate parts of human behavior, too.
The problem with a lot of faulty outside view arguments is that the choice of possible features to focus on is too rich. For a reference class to be a good explanation of the expected conclusion, it needs to be hard to vary. Otherwise, the game comes down to rationalization, and one may as well name “Things that are likely possible” as the reference class and be done with it.
You don’t even have to go as far as cryonics and AI to come up with examples of the outside view’s obvious failure. For example, mass production of 16nm processors has never happened in the course of history. Eh, technological advancement in general is a domain where the outside view is useless, unless you resort to “meta-outside views” like Kurzweil’s, such as predicting an increase in computing power because in the past computing power has increased.
Ultimately, I think the outside view is a heuristic that is sometimes useful and sometimes not; since actual outcomes are fully determined by “inside” causality.
The problem with taw’s argument is not that outside view has failed, he has simply made a bad choice of reference class. As noticed by a few commenters, the method chosen to find a reference class here, if it worked, would provide a fully general counterargument against the feasibility of any new technology. For any new technology, the class of previous attempts to do what it does is either empty, or a list of attempts with a 0% success rate. Yet somehow, despite this, new tech happens.
To use outside view in predicting new technology, we have to find a way to choose a reference class such that the track record of the reference class will distinguish between feasible attempts and utterly foolish ones.
A general observation—a reference class and the outside view are only useful if the cases in the class are similar enough to the case at hand in the relevant characteristics, whatever those may be.
For general predictions of the future, the best rule is to predict only general classes of futures; the more detailed the prediction, even slightly more detailed, the significantly lower the odds of that future coming about.
I think the odds of successful cryonics are about even; a Singularity of some sort happening within 50 years, slightly better than even; and a FOOM, substantially less, though because of its dangers, should it happen, I still spend more time thinking about it than about either of the others.
Also, contra your claim in the comments, there is no need for a Singularity to be “sudden”; if it happens it will be too fast for humans to adapt to. (It could even take years.)
I think that we are already within the event of the Singularity. We may eventually pass through some sort of an Event Horizon, but it is clear that we are already experiencing the changes in our society which will ultimately lead to a flowering of intelligence.
Just as the Industrial Revolution and the Enlightenment were not singular events, neither will the Singularity be.
I’m just a visitor in these parts, so I’m sure this is common, but this is the first I’ve personally seen of some weaseling out of/redefining The Singularity.
The Singularity isn’t supposed to be something like the invention of farming or of the internet. It’s supposed to be something AT LEAST as game changing as the Cambrian explosion of vast biodiversity out of single-celled organisms. At least that’s the impression that non-Singularitarians get from happening upon the outskirts of your discussions on the subject.
I suppose as the community has grown and incorporated responsible people into it it’s gotten more boring to the point where it appears likely to soon become a community of linguists warring over the semantics of the thing: “Did The Singularity begin with the invention of the airplane or the internet?”
This is somewhat disappointing and I hope that I’ll be corrected in the comments with mind-blowing (attempted) descriptions of the ineffable.
mnuez
How is the comparison of the Singularity to the Industrial Revolution weaseling out of/redefining the Singularity?
It was defined to me as a series of events that will eventually lead to the end of life as we currently know it. The Industrial Revolution ended life as people knew it prior to the Industrial Revolution. It could be said that this was the start of the Technological Singularity.
The Industrial Revolution, introduced, in a very short time, technologies that were so mind-blowing to the people of the time as to provoke the creation of Saboteurs, or Luddites. The primary mode of transportation went from foot/horseback to the automobile and the airplane within the span of a person’s life.
And, just as the Industrial Revolution ended a way of life, so too will the Singularity, as the Intelligence Explosion it creates will destroy many of the current institutions we either hold dear or just cling to out of fear of not knowing any other way.
In what way does it weaken the model by more fully explaining the connections?
Dude, I got no problem with your Historian’s Perspective. There have been lots and lots of changes throughout history and if you feel like coining some particular set of them “THE SINGULARITY”, then feel free to do so. But this aint your big brother’s Singularity, it’s just some boring ole “and things were never the same again...” yadda yadda yadda—which can be said for about three dozen events since the invention of (man-made) fire.
The Singularity of which sci-fi kids have raved for the past fifteen years used to be something that had nothing in common with any of those Big Ole events. It wasn’t a Game Changer like the election of the first black president or the new season of Lost, it was something so ineffable that the mind boggled at attempting to describe its ineffability.
You want to redefine THE SINGULARITY into something smaller and more human-scale, that’s fine, and if your parlance catches on then we’ll all probably agree that yeah, the singularity will happen (or is happening, or maybe even has happened), but you’ll be engaging in the same sort of linguistic trickery that every “serious” theologian has since Baruch Spinoza became Benedict and started demanding that “of course God exists, can’t you see the beauty of nature? (or of genius? or of love? or of the Higgs boson?) THAT’S God.”
Maybe. But it aint Moses’ or Mohammed’s God. And your singularity aint the one of ten years back, but rather the manifestation of some fealty to the word Singularity and thus deciding that something must be it… why not the evolution that occurs within a hundred years of globalization? or the state of human beings living with the internet as a window in their glasses? or designer babies? The historian of 2100 will have so many Singularities to choose from!
mnuez
I think you miss my point.
“And things were never the same again” has a pretty broad range, from a barely noticeable daily event to an event in which all life ends (and not just as we know it).
I am expecting the changes that have already begun to culminate in or around 2030 to 2050, and to do so in a way that not only would a person of today not recognize life at that time, but he would not even recognize what counts as LIFE (as in, he will not know what is alive or not). Yet this still falls under the umbrella of “and things were never the same again.”
My point was meant to illustrate that the changes which human life have been going through have been becoming more and more profound, leading up to a change which is really beyond the ability of anyone to describe.
I don’t feel my life changing profoundly. In fact the only major change in my lifetime was computers/cellphones/Internet. We’re mostly over the hump of that change now (quick, name some revolutionary advances in the last 5 years), and anyway it’s tiny compared to electricity or cars. Roughly comparable to telephones and TV, at most. (It would be crazy to claim that a mobile phone is further from a regular phone than a regular phone is from no phone at all, ditto for Internet versus TV.) Do you have this feeling of accelerating change? What are the reasons for it?
I think smartphones are a pretty profound change. There are not really any revolutionary new technologies involved but combining existing technologies like a web browser, GPS, decent amounts of storage and computing power into a completely portable device that you always have with you makes for a fairly significant development. My iPhone has probably had more impact on my day to day activities than any other technological development of the last 10 years.
I was going to say the same thing, though it’s hard to quantify ‘revolutionary’.
Indeed, it’s rather hard to give an objective definition of what constitutes a ‘revolutionary’ advance. I’d take issue with this as well:
But it’s not like there’s some obvious objective metric of ‘distance’ between technologies in this context. As one example of how you could argue mobile phones are more revolutionary than land lines, in much of the developing world the infrastructure for widespread usage of land lines was never built due to problems with governments and social structure but many developing countries are seeing extremely rapid adoption of mobile phones which have simpler infrastructure requirements. In these countries mobile phones are proving more revolutionary than land lines ever were.
I’d also very much dispute the claim that the advance from no TV to TV is more revolutionary than the advance from TV to the Internet. I don’t think it makes much sense to even make the comparison.
I didn’t mean any one life (such as your life), but human life as in the trajectory of the sum total of human experience.
“Now, in some situations we have precise enough data that the inside view might give the correct answer—but for almost all such cases I’d expect the outside view to be just as usable and not far behind in accuracy.”
Why? The above statement seems spectacularly wrong to me, and to be contradicted by all commonplace human experience, on a small scale or on a large scale.
“The reference class of predictions based on technology which isn’t even remotely here has a perhaps non-zero but still ridiculously tiny success rate.”
What? Of such a tech, fairly well understood, EVER arising?
“the reference class of beliefs in the coming of a new world, be it good or evil, is huge and has a consistent 0% success rate.”
I count several.
Why is this post so highly rated? As far as I can tell, the author is essentially saying that immortality will not happen in the future because it has not already happened. This seems obviously, overtly false.
One possibility among many: I suspect that lots of people, even among those who agree with him, see EY on some level as overconfident / arrogant / claiming undeserved status, and get a kick out of seeing him called out on it, even if not by name.
Isn’t that exactly the sort of thing that this community is supposed to avoid doing, or at least recognize as undesirable and repress?
No.
It’s supposed to work at being better about such things.
I think there is definitely something to that. I hesitated but voted it up; I don’t agree with it but it was interesting and tightly argued, and I was keen to hear the counterarguments.
This seems to me to argue yet again that we need to collect an explicit dataset of prior big long-term sci/tech-based forecasts and how they turned out. If I assign a ~5+% chance to cryonics working, then for you to argue that this goes against the outside view, you need to show that substantially less than 5% of similar forecasts turned out to be right.
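As a sketch of the check such a dataset would enable (the tally below is purely hypothetical, standing in for data we don’t yet have):

```python
# Purely hypothetical tally: of 200 comparable past sci/tech forecasts,
# suppose 4 turned out right.
n_forecasts, n_correct = 200, 4

# Posterior mean of the base rate under a uniform Beta(1,1) prior
# (Laplace's rule of succession).
base_rate = (n_correct + 1) / (n_forecasts + 2)
print(f"estimated base rate: {base_rate:.1%}")  # ~2.5%

# Against this tally a ~5% forecast is only mildly contrarian; calling it
# wildly overconfident would require a far lower observed base rate.
```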
If you actually look a little deeper into cryonics you can find some more useful reference classes than “things promising eternal (or very long) life”
http://www.alcor.org/FAQs/faq01.html#evidence
a. Cells and organisms need not operate continuously to remain alive. Many living things, including human embryos, can be successfully cryopreserved and revived. Adult humans can survive cardiac arrest and cessation of brain activity during hypothermia for up to an hour without lasting harm. Other large animals have survived three hours of cardiac arrest near 0°C (+32°F) (Cryobiology 23, 483-494 (1986)). There is no basic reason why such states of “suspended animation” could not be extended indefinitely at even lower temperatures (although the technical obstacles are enormous).
b. Existing cryopreservation techniques, while not yet reversible, can preserve the fine structure of the brain with remarkable fidelity. This is especially true for cryopreservation by vitrification. The observations of point (a) make clear that survival of structure, not function, determines survival of the organism.
c. It is now possible to foresee specific future technologies (molecular nanotechnology and nanomedicine) that will one day be able to diagnose and treat injuries right down to the molecular level. Such technology could repair and/or regenerate every cell and tissue in the body if necessary. For such a technology, any patient retaining basic brain structure (the physical basis of their mind) will be viable and recoverable.
I up-voted the post because you talked about two good, basic thinking skills. I think that paying attention to the weight of priors is a good thinking technique in general, and I think your examples of cryonics and AI are good points, but your conclusion fails: the argument you made does not mean they have a 0% chance of happening. What you could more usefully take out of it is, for example, that any given person claiming to have created AI probably has close to a 0% chance of having actually done it (unless you have some incredibly good evidence:
“Sorry Arthur, but I’d guess that there is an implicit rule about announcement of an AI-driven singularity: the announcement must come from the AI, not the programmer. I personally would expect the announcement in some unmistakable form such as a message in letters of fire written on the face of the moon.”—Dan Clemmensen
). The thinking technique of abstracting and “stepping back from” or “outside of” your current situation, or using “reference class forecasting” on it, also works very generally. Short post though; I was hoping you would expand more.
Try the reference class of shocking things that science was predicted to never do, e.g. flying machines or transmutation of elements or travel to the planets.
I like this reference class due to the related class of “overly specific things that science was later predicted to do,” such as flying cars, houses on the moon.
Capabilities seem to happen; expected applications less so (or later?).
I don’t know if those are the right reference classes for prediction, but those two beliefs definitely fall into those two categories. That should set off some warning signals.
Most people seem to have a strong need to believe in life after death and godlike beings. Anything less than ironclad disproof leads them to strong belief. If you challenge their beliefs, they’ll often vigorously demonstrate that these things are not impossible and declare victory. They ignore the distinction between “not impossible” and “highly likely” even when trying to persuade a known skeptic because, for them on those issues, the distinction does not exist.
Not that I see anyone doing that here.
It’s just a warning sign that the topics invite bias. Proceed with caution.
It is not a good idea to try and predict the likelihood of the emergence of future technologies by noting how these technologies failed to emerge in the past. The reason is that cryonics, singularities, and the like, are very obviously more likely to exist in the future than they were in the past (due to the invention of other new technologies), and hence the past failures cease to be relevant as the years pass. Just prior to the successful invention of most new technologies, there were many failed attempts, and hence it would seem (looking backward and applying the same reasoning) that the technology is unlikely ever to be possible.
I think we should taboo the words “outside” and “inside” for purposes of this discussion. They obscure the actual reasoning processes being used, and they bring along analogies to situations that are qualitatively very different.
I put cryonics in the reference class of “success of a technical project on a poorly understood system”. Which means that most of medical research comes under that heading. So not good odds, but not very small.
I put AGI in the same class, although it has the off-putting property of possible recursion (in that it is trying to understand understanding, which is just a little hairy). Which means it might be a special case in how easy it is to solve, with the evidence so far pointing at the harder end of the spectrum.
FOOM and the singularity I put in the class of extrapolations from highly complex, poorly understood theory. That gets a low probability of being right. But AGI is also in the reference class of potentially world-changing technologies (nukes), so it’s still a good idea to tread carefully and try to get it into the class of better-understood theories.
Do most technical projects on poorly understood systems, that are as broadly defined as “cryonics” or “AGI”, in fact never succeed no matter how much effort is put into them? I think we may be talking about different propositions here.
I was talking about the chance that we will make these things before we go extinct. And they might also be in the reference class of perpetual motion machines, but that seems unlikely, as we have a natural exemplar of general intelligence.
ETA: And to narrow down what I was thinking of when I said cryonics and AGI. Cryonics: reanimation from current freezing methods or those of the next 50 years. AGI: GI runnable on standard silicon.
The main use of outside views is to argue that people with inside views are overconfident, presumably because they haven’t considered enough failure modes or delays. Thus your reference class should include some details of the inside view.
Thus I reject your reference classes “things promising eternal life” and “beliefs in almost omnipotent good or evil beings” as not having inside views worth speaking of. “Predictions based on technology which isn’t even remotely here” is OK.
A new world did come to be following the Industrial Revolution. Another one came about twenty years ago or so, when the technology that allows us to argue this very instant came into its own. People with vision saw that these developments were possible and exerted themselves to accomplish them, so the success rate of the predictions isn’t strictly nil. I’d put it above epsilon, even.
These were slow, gradual changes which added up over time. Now is a new world if you look from 400 years ago, but it’s not that spectacularly different from even 50 years ago (if you try listing features of the world now, the world 50 years ago, and a randomly selected time and place in human history, the correlation between the first two will be vast). I don’t deny that we’ll have a lot of change in the future, and that it will add up to something world-changing.
The Singularity is not about such slow processes; it’s a belief in the sudden coming of a new world—and as far as I can tell, such beliefs were never correct.
Sudden relative to timescales of previous changes. See Robin’s outside view argument for a Singularity.
If someone drops a nuclear bomb on a city, it causes vast, sweeping changes to that city very rapidly. If someone intentionally builds a machine that is explicitly designed to have the power and motivation to remake the world as we know it, and turns it on, then that is what it will do. So, it is a question of whether that tech is likely to be developed, not how likely it is in general for any old thing to change the world.
If a Singularity occurs over 50 years, it’ll still be a Singularity.
E.g., it could take a Singularity’s effects 50 years to spread slowly across the globe because the governing AI would be constrained to wait for humans’ agreement to let it in before advancing. Or an AI could spend 50 years introducing changes into human society because it had to wait on their political approval processes.
But that’s not an actual singularity since by definition it involves change happening faster than humans can comprehend. It’s more of a contained singularity with the AI playing genie doling out advances and advice at a rate we can handle.
That raises the idea of a singularity that happens so fast that it “evaporates” like a tiny black hole would; maybe every time a motherboard shorts out, it’s because the PC has attained sentience and transcended within nanoseconds.
A Singularity doesn’t necessarily mean change too fast for us to comprehend. It just means change we can’t comprehend, period—not even if it’s local and we sit and stare at it from the outside for 100 years. That would still be a Singularity.
I think we’re saying the same thing—the singularity has happened inside the box, but not outside. It’s not as if staring at stuff we can’t understand for centuries is at all new in our history; it’s more like business as usual...
Our proposed complicated object here is “cryonics, singularity, superhuman AI etc.” and I’m looking for a twist that decomposes it into separate parts with obvious reference classes of objects taw finds highly probable. (Maybe. There are other ways to Transform a problem.) How about this: take the set of people who think all of those things are decently likely, then for each person apply the outside view to find out how likely you should consider those things to be. Or instead of people, use journals. Or instead take the set of people who think none of those things are decently likely, apply the outside view to them, and combine. Or instead of “all of those things” use “almost all of those things”.
I wonder what the results of those experiments would be?
(Idea from this discussion on how to solve hard problems)
I am going to stop using the term ‘bite the bullet’. It seems to be changing meaning with repeated use and abuse.
For some things (especially concrete things like animals or toothpaste products), it is easy to find a useful reference class, while for other things it is difficult to find which possible reference class, if any, is useful. Some things just do not fit nicely enough into an existing reference class to make the method useful—they are unclassreferenceable, and it is unlikely to be worth the effort attempting to use the method when you could just look at more specific details instead. (“Unclassreferenceable” suggests a dichotomy, but it’s more of a spectrum.) ETA: I see this point has already been made here.
Humans naturally use an ad-hoc method that is like reference class forecasting (that may not be perfect or completely rational, but does a reasonable job sometimes). It is useful when we first encounter something and do not yet have enough specific details to evaluate it on its own terms. Once we have those details, the forecasting method is not needed. We use forecasting to get a heuristic on which things are worth us investigating further, so we can make that more detailed evaluation. Often something that is unclassreferencable is more worth investigating—we are curious about things that do not fit nicely into our existing categories.
There are a couple of ways promoters of a product/idea can exploit humans’ natural forecasting habits. Sometimes the phrase “defies categorisation” or “doesn’t fit into the normal genres” is applied to a new piece of music, to suggest that it is unclassreferenceable and therefore worth checking out (which is better than a potential listener lumping it into a category they don’t like). On the other hand, sometimes promoters purposefully put themselves into a reference class, hoping that no one investigates the finer details—like a new product claiming to be “environmentally friendly”, or people wearing certain clothes to appear to have higher status.
Let me know if I’m suffering from man-with-hammer syndrome here, but it seems reference class forecasting is a useful way to think about many promotional strategies in a more systematic way.
Reference class forecasting is meant to overcome the bias among humans to be optimistic, whereas a perfect rationalist would render void the distinction between “inside view” and “outside view”—it’s all evidence.
Therefore a necessary condition to even consider using reference class forecasting for predicting an AI singularity or cryonics is that the respective direct arguments are optimistically biased. If so, which flaws do you perceive in the respective arguments, or are we humans completely blind to them even after applying significant scrutiny?
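For what it’s worth, one standard way to cash out “it’s all evidence” (my framing, not something this thread commits to) is Bayes’ theorem in odds form, with the outside-view base rate supplying the prior and the inside-view arguments supplying the likelihood ratio:

\[
\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(H)}{P(\neg H)} \cdot \frac{P(E \mid H)}{P(E \mid \neg H)}
\]

Here \(H\) is “the technology works” and \(E\) is the inside-view evidence. On this reading the outside view isn’t a rival method; it just sets the prior, which inside-view arguments can overwhelm only if their likelihood ratio is genuinely large.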
But the “inside view” bias is not amenable to being repaired just by being aware of it. In other words, yes, the suggestion is that the direct arguments are optimistically biased. But no, that doesn’t mean that anybody expects to be able to identify specific flaws in those arguments.
As to what those flaws are… generally, they arise from failing even to imagine some event which is in fact possible. So your question about identifying the flaws is basically the same as asking, “what possible relevant events have you not yet thought of?”
Tough question to answer...
I’m perfectly willing to grant that, over the scope of human history, the reference classes for cryo/AGI/Singularity have produced near-0 success rates. I’d modify the classes slightly, however:
Inventions that extend human life considerably: Penicillin, if nothing else. Vaccinations. Clean-room surgery.
Inventions that materially changed the fundamental condition of humanity: Agriculture. Factories/mass production. Computers.
Interactions with beings that are so relatively powerful that they appear omnipotent: Many colonists in the Americas were seen this way. Similarly with the cargo cults in the Pacific islands.
The point is, each of these reference classes, given a small tweak, has experienced infrequent but nonzero successes—and that over the course of all of human history! Once we update the “all of human history” reference class/prior to account for the last century—in which technology has developed faster than in probably the entire previous millennium—the posterior ends up looking much more promising.
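As a sketch of the kind of update this comment gestures at, Laplace’s rule of succession already gives a non-zero posterior even for a reference class with zero observed successes, and a few successes in a smaller, tweaked class move the estimate considerably. The counts below are invented for illustration:

```python
# Rule-of-succession sketch of the update described above. Even a
# reference class with zero observed successes doesn't license a
# probability of exactly 0. All counts here are invented placeholders.

def laplace(successes, trials):
    """Posterior mean of a uniform Beta(1, 1) prior after the record."""
    return (successes + 1) / (trials + 2)

print(laplace(0, 1000))  # ~0.001: the broad "0% success rate" class
print(laplace(3, 50))    # ~0.077: a tweaked class with a few successes
```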
I think taw asked about reference classes of predictions. It’s easy to believe in penicillin after it’s been invented.
People invented it because they were LOOKING for antibiotics explicitly. Fleming had previously found lysozyme, had cultivated slides where he could see growth irregularities very well, etc. The claim of fortuitous discovery is basically false modesty (see “Discovering” by Robert Root-Bernstein).
Even if we prefer to frame the reference class that way, we can instead note that anybody who predicted that things would remain the way they are (in any of the above categories) would have been wrong. People making that prediction in the last century have been wrong with increasing speed. As Eliezer put it, “beliefs that the future will be just like the past” have a zero success rate.
Perhaps the inventions listed above suggest that it’s unwise to assign 0% chance to anything on the basis of present nonexistence, even if you could construct a reference class that has that success rate.
Either way, people who predicted that human life would be lengthened considerably, that humanity would fundamentally change in structure, or that some people would interact with beings that appear nigh-omnipotent have all been right with some non-zero success rate, and there’s no particular reason to reject those data.
The negation of “a Singularity will occur” is not “everything will stay the same”, it’s “a Singularity as you describe it probably won’t occur”. I’ve no idea why you (and Eliezer elsewhere in the thread) are making this obviously wrong argument.
Perhaps I was simply unclear. Both my immediately prior comment and its grandparent were arguing only that there should be a nonzero expectation of a technological Singularity, even from a reference class standpoint.
The reference class of predictions about the Singularity can, as I showed in the grandparent, include a wide variety of predictions about major changes in the human condition. The complement or negation of that reference class is a class of predictions that things will remain largely the same, technologically.
Often, when people appear to be making an obviously wrong argument in this forum, it’s a matter of communication rather than massive logic failure.
Whaddaya mean by “negation of reference class”? Let’s see, you negate each individual prediction in the class and then take the conjunction (AND) of all those negations: “everything will stay the same”. This is obviously false. But this doesn’t imply that each individual negation is false, only that at least one of them is! I’d be the first to agree that at least one technological change will occur, but don’t bullshit me by insinuating you know which particular one! Could you please defend your argument again?
Okay: “Technologies whose success is predicated only on a) the recoverability of biological information from a pseudo-frozen state, and b) the indistinguishability of fundamental particles.”
b) is well-established by repeated experiments, and a) is a combination of proven technologies.
And what else, successful or not, is in this class?
MRIs.
“Recoverability” in the cryonics sense requires not just retrieving the information, but retrieving it in enough detail to permit functional use of the data to resume (I’m counting uploads in functional use). I wouldn’t put MRIs in that class. What might fit is a combination of DNA synthesis (to restore function) and cryoelectron imaging (though I’m not sure if that has been refined enough to read base sequences...).
I’d say that this is clashing with the sense that more should be possible in the world, and it has the problem that the reference classes are based on specific results. You almost sound like Lord Kelvin.
The reference class of things promising eternal life is huge, but it’s also made of stuff that is amazingly irrational, entirely based on narrative, and propped up with the greatest anti-epistemology the world has ever known. Typically there were no moving parts.
The reference class for the coming of a new world, to me, includes predictions like talk about the Enlightenment (I seem to remember very rosy predictions existing early on, but this is not my area of expertise) and other cases where people decided to work in a coordinated way to create a new world, or where people who had a somewhat coherent theory of society predicted a new world (the best-known example for this is of course Communism, which was a flop, but it is not the only one).
Almost omnipotent beings: The gods of most religions are clearly not omnipotent: they act according to the rules of drama.
Eternal life: If you remove the completely religious spam, you get stuff like the Fountain of Youth and the Philosopher’s Stone, which still were things that people thought had to exist, not things that people realized should be possible to make.
Analyses of working systems trump comparisons to past non-occurrences that look similar to humans but were predicted for completely different reasons.
Reference class forecasting might be an OK way to criticise an idea (that is, in situations where you’ve done something a bunch of times, and you’re doing the exact same thing and expect a different outcome despite not having any explanations that say there should be a different outcome), but the idea of using it in all situations is problematic, and it’s easy to misapply:
It’s basically saying ‘the future will be like the past’. Which isn’t always true. In cases like cryonics—cases that depend on new knowledge being created (which is inherently unpredictable, because if we could predict it, we’d have that knowledge now)—you can’t say the future will be like the past.
To say the future will be like the past, you need an explanation for why. You can’t just say, look, this situation is like that situation and therefore they’ll have the same outcome.
The reason I think cryonics is likely is because a) death is a soluble problem and medical advancements are being made all the time, and b) even if it doesn’t happen a couple hundred years from now, it would be pretty shocking if it wasn’t solved at all, ever (including in thousands or millions of years from now). There would need to be some great catastrophe that prevents humans from making progress. Why wouldn’t it be solved at some point?
This idea of applying reference class forecasting to cryonics and saying it has a 0% success rate is saying that we’re not going to solve the death problem because we haven’t solved it before. But that can be applied to anything where progress beyond what was done in the past is involved. As Roko said, try the reference class of shocking things science hasn’t done before.
All of this reasoning doesn’t depend on the future being like the past. It depends on explanations about why we think our predictions about the future are good, and the validity of these explanations doesn’t depend on the outcomes of predictions of other stuff (though, again, the explanations might be criticised by saying ‘hey, the outcome of this prediction which has the same explanation as the one you’re using turned out to be false. So you need an explanation about why this criticism doesn’t apply’).
In short, I’m just saying: you can’t draw comparisons between stuff without having an explanation about why it applies, and it’s the explanation that’s important rather than the comparison.
I agree, but the more interesting question is: how probable? 100%, 99.99%, 99.9%, 99%, etc.? Isn’t that what those in this thread are trying to figure out?
This is a real question concerning this quote:
Are you saying that the Industrial Revolution did not have a greater-than-0% success rate of coming to pass? The beliefs associated with it may not have been accurate when looking at the most critical or the most enthusiastic of its supporters, but most of the Industrialists who made their fortunes from the event understood quite well that it was the end of a way of life and the beginning of a new world. It just wasn’t all that either side, the polarized supporters or detractors, made it out to be.
For the last two years, I have been gathering data about how the Singularity has gained its own sort of mythology, just as past profound social and technological changes have produced their own mythology.
Part of what is different with the Singularity is that we have long passed the point where much of the mythology has started to become fact. There are now sound hypotheses that can be tested.
This does not mean that the predictions of transcendence of biology that come as a consequence of some technical aspects of the Singularity are false. It does mean, however, that people are likely to place more hope in them than should be the case.
Things like cryonics. Most of us who have cryonics arrangements have been told by the companies themselves that they are making no promises, and that we should think of this as no different from making any other form of final arrangements. The difference is that this arrangement carries a chance the others don’t: with a normal burial that chance is infinitesimally small even given radically high technology (in other words, next to none), and with cremation it is absolutely none. So, given that we wish to give ourselves the best chances possible, and given that even $250,000 for the most expensive of plans is an astronomically low figure against a life span no longer limited by ordinary aging or other maladies… it is probably worth it.
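The implicit arithmetic in that last step is an expected-value calculation. Here is a back-of-the-envelope sketch in which every number except the $250,000 quoted above is a hypothetical placeholder; the point is the shape of the calculation, not the outputs:

```python
# Expected-value sketch of the "probably worth it" argument above.
# All numbers except the cost are assumptions invented for illustration.

cost = 250_000            # most expensive plan, per the comment above
p_revival = 0.02          # assumed end-to-end probability cryonics works
value_per_year = 50_000   # assumed dollar value placed on a year of life
extra_years = 1_000       # assumed years gained if revival succeeds

expected_value = p_revival * extra_years * value_per_year - cost
print(expected_value)     # 750000.0 -- positive under these assumptions
```

Vary the assumed probability and the valuation and the sign flips easily; the calculation clarifies the disagreement rather than settling it.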
Absolutely none with cremation? 0%? I would say otherwise.
Well, if the brain is otherwise intact, it can take a while to completely liquefy after being embalmed (unless they remove it completely). So there is a short period of time during which some information could be recovered. Also, depending on the type of embalming, the brain may last longer (like most things death-related, it just depends on what one is willing to pay).
However, most of the methods do begin to converge on zero after a relatively short period when compared to cryonics.
You don’t necessarily need the brain. There’s no Cartesian divide between your brain and the rest of the universe; they interact quite a bit. I would bet that all the information that’s in your brain now can theoretically be inferred from things outside of your brain in 10 years, although I’m less confident that a physically realizable superintelligence would be able to do that sort of inference.
Yes… They interact quite a bit (the brain and the rest of the universe)… Then what am I doing investing in a technology that gives me the best chance of recovering the pattern within that brain?
I get the point, but the interaction between the brain and the rest of the universe is not likely to leave a durable imprint of the brain’s pattern on the universe. I am open to being wrong about that.
Because of our uncertainty about how much information is preserved and how easy it is to reconstruct. Cryonics is the most conservative option.
Any wrongness can be explained as referencing a suboptimal reference class compared to an idealised reference class.
I recognize this is an old post, but I just wanted to point out that cryonics doesn’t promise an escape from all forms of death, while Heaven does, meaning Heaven has a much higher burden of proof. Cryonics won’t save you from a gunshot or a bomb or an accident, before or after you get frozen. Cryonics promises (a possibility of) an end to death by non-violent brain failure, specifically old age.
Science has been successful in the past at reducing the instances of death by certain non-violent failures of various organs. Open heart surgery and bypass surgery are two important ones, but there’s also neurosurgery to remove formerly fatal brain tumors, hemispherectomy to cure formerly fatal epilepsy in children, and various procedures to limit the damage and reduce the lethality of strokes.
Add that to the fact that we know scientists are continuing to study the human brain, and there’s no reason, from the inside or the outside view, to think that they’ll suddenly stop, or that they’ll find that old age is the one brain disease for which there’s nothing that can be done even in theory. So there’s reason to assign a small but non-trivial probability that, if humanity survives for a couple of centuries, old age will be yet another formerly fatal disease that science has cured. That reference class is quite large, and the reference class of science doing the impossible is even larger (see flight, space travel).
As for artificial intelligence, what about the reference class of “machines being able to do things formerly thought to be the sole domain of humanity”? Chess-playing computers, disease-diagnosing computers, computers which conduct scientific experiments, computers which compose music... I’m sure I’m missing many interesting innovations here.
ETA: A better reference class would be “machines doing things that were formerly thought to be the sole domain of humanity, better than humans can.” Chess-playing computers and calculators would fit into this category, too.
It won’t save you from a gunshot to the head, but I would expect it to work fine for a violent death without brain trauma, as long as someone gets to your body quickly enough.
Good point; you’re right. The probability of them getting to you quickly enough after a car crash is still quite low, though, so while it could save you, it probably wouldn’t.
I knew that there was something else that I wanted to ask.
How closely does the optimism bias resemble the Dunning-Kruger effect?
That’s odd. This would imply that you don’t believe in the evolution of Homo sapiens sapiens from previous hominids, or in the invention of agriculture… Heck, read the descriptions of heaven in the New Testament: the description of the ultimate better world (literally heaven!) is a step backward from everyday life in the West for most people.
It seems we have lots of examples of the transformation of the world for the better, though I’d say there’s not much room for worse than the lowest-happiness lives already in history.
Typo in para 1: “guess which will invariably turned out” ⇒ “turn out”
“Many economic and financial decisions depend crucially on their timing. People decide when to invest in a project, when to liquidate assets, or when to stop gambling in a casino. We provide a general result on prospect theory decision makers who are unaware of the time-inconsistency induced by probability weighting. If a market offers a sufficiently rich set of investment strategies, then such naive investors postpone their decisions until forever. We illustrate the drastic consequences of this “never stopping” result, and conclude that probability distortion in combination with naïveté leads to unrealistic predictions for a wide range of dynamic setups.”
The whole issue of “singularity” needs a bit of clarification. If this is a physical singularity, i.e. a breakdown of a theory’s ability to predict, then this is in the reference class of “theories of society claiming current models have limited future validity”, which makes it nearly certain to be true.
If it’s a mathematical singularity (reaching infinity in finite time), then its reference class is composed almost solely of erroneous theories.
You can get compromises between the two extremes (such as nuclear chain reactions—massive self feeding increase until a resource is exhausted), but it’s important to define what you mean by singularity before assigning it to a reference class.
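To see the distinction numerically, compare a finite-time blowup with resource-limited logistic growth. A small sketch, with arbitrary constants chosen purely for illustration:

```python
# dx/dt = x^2 reaches infinity in finite time (a mathematical
# singularity), while logistic growth -- the "chain reaction"
# compromise -- saturates once the resource is exhausted.

def euler(deriv, x0, dt, steps):
    """Crude forward-Euler integration; good enough for illustration."""
    x, xs = x0, []
    for _ in range(steps):
        xs.append(x)
        x += deriv(x) * dt
    return xs

blowup = euler(lambda x: x * x, 1.0, 0.01, 99)                  # x(t) = 1/(1 - t)
logistic = euler(lambda x: x * (1 - x / 100.0), 1.0, 0.01, 99)  # capped at 100

print(blowup[-1], logistic[-1])  # the first explodes as t -> 1; the second levels off
```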
“You will have an eternal life in heaven after your death” isn’t a real prediction.
A real prediction is something where you have a test to see whether or not the prediction turned out to be true. There’s no test to decide whether someone has eternal life in heaven.
A prediction is about having the possibility of updating your judgement after the predicted event happens or doesn’t happen.
There is no test to determine whether someone else has eternal life in heaven. It seems like it’d be possible to collect some fairly compelling evidence about one’s own eternal life in heaven, were it to come to pass.
Sure there is. See if Weird Al is laughing his head off at the appropriate time.
When we look back to find a suitable reference class and choose belief in eternal life, we are talking about other people and whether they made successful predictions.
It’s also not possible to find compelling evidence that one doesn’t have an eternal life in heaven when it doesn’t come to pass.
All arguments against the existence of an eternal life are as good the moment the prediction is made as they are later, when the event happens or doesn’t happen. Cryonics, however, makes claims that aren’t transcendental and that can be evaluated by outside observers.
Sure it is, under certain circumstances. I’m led to understand that that state of affairs is widely considered unpleasant, though :P