there seems to be something about this whole endeavour (including but not limited to Eliezer’s writings) that makes people think !!!CRAZY!!! and !!!DOOMSDAY CULT!!!,
Yes, and it’s called “pattern completion”, the same effect that makes people think “Singularitarians believe that only people who believe in the Singularity will be saved”.
This is discussed in Imaginary Positions.
I must know, have you actually encountered people who literally think that? I’m really hoping that’s a comical exaggeration, but I guess I should not overestimate human brains.
“It’s basically a modern version of a religious belief system and there’s no purpose to it, like why, why must we have another one of these things … you get an afterlife out of it because you’ll be on the inside track when the singularity happens—it’s got all the trappings of a religion, it’s the same thing.”—Jaron here.
I’ve encountered people who think Singularitarians think that, never any actual Singularitarians who think that.
Yeah, “people who think Singularitarians think that” is what I meant.
I’ve actually met exactly one something-like-a-Singularitarian who did think something-like-that — it was at one of the Bay Area meetups, so you may or may not have talked to him, but anyway, he was saying that only people who invent or otherwise contribute to the development of Singularity technology would “deserve” to actually benefit from a positive Singularity. He wasn’t exactly saying he believed that the nonbelievers would be left to languish when cometh the Singularity, but he seemed to be saying that they should.
Also, I think he tried to convert me to Objectivism.
Technological progress has increased wealth inequality a great deal so far.
Machine intelligence probably has the potential to result in enormous wealth inequality.
How, in a post-AGI world, would you define wealth? Computational resources? Matter?
I don’t think there’s any foundation for speculation on this topic at this time.
Unless we get a hard-takeoff singleton, which is admittedly the SIAI expectation, there will be massive inequality, with a few very wealthy beings and average income barely above subsistence. Thus saith Robin Hanson, and I’ve never seen any significant holes poked in that thesis.
Robin Hanson seems to be assuming that human preferences will, in general, remain in their current ranges. This strikes me as unlikely in the face of technological self-modification.
I’ve never gotten that impression. What I’ve gotten is that evolutionary pressures will, in the long term, still exist—even if technological self-modification leads to a population that’s 99.99% satisfied to live within strict resource consumption limits, unless they harshly punish defectors, the 0.01% with a drive for replication or expansion will overwhelm the rest within a few millennia, until the average income is back to subsistence. This doesn’t depend on human preferences, just the laws of physics and natural selection.
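A toy model (my own sketch, with made-up growth rates rather than anything from Hanson) shows how fast that arithmetic works: even a 0.01% expansionist minority with a modest per-century advantage becomes the overwhelming majority within a few millennia.

```python
# Toy replicator arithmetic: a population that is 99.99% "satisficers"
# (no net growth) and 0.01% "expanders" with a small per-century growth
# advantage. The growth rate is an illustrative assumption, not Hanson's.

satisficers = 0.9999      # fraction content with strict consumption limits
expanders = 0.0001        # fraction with a drive to replicate/expand
growth_per_century = 1.5  # assumed modest advantage for expanders

for century in range(31):
    share = expanders / (satisficers + expanders)
    if century % 5 == 0:
        print(f"year {century * 100:>5}: expanders are {share:.2%} of the population")
    expanders *= growth_per_century

# With these assumptions the expanders cross 50% between years 2,200 and
# 2,300 and are ~95% of the population by year 3,000: the outcome is set
# by selection, not by what the 99.99% prefer.
```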
What evolutionary pressures? Even making the incredible assumption that we will continue to use sequences of genes as a large part of our identities, what’s to stop a singleton of some variety from eliminating drives for replication or expansion entirely?
I feel uncomfortable speculating about a post-machine-intelligence future even to this extent; this is not a realm in which I am confident about any proposition. Consequently, I view all confident conclusions with great skepticism.
You’re still not getting the breadth and generality of Hanson’s model. To use recent LW terminology, it’s an anti-prediction.
It doesn’t matter whether agents perpetuate their strategies by DNA mixing, binary fission, cellular automata, or cave paintings. Even if all but a tiny minority of posthumans self-modify not to want growth or replication, the few that don’t will soon dominate the light-cone. A singleton, like I’d mentioned, is one way to avert this. Universal extinction and harsh, immediate punishment of expansion-oriented agents are the only others I see.
You (or Robin, I suppose) are just describing a many-agent prisoner’s dilemma. If TDT agents beat the dilemma by cooperating with other TDT agents, then any agents that started out with a different decision theory will have long since self-modified to use TDT.
Alternately, if there is no best decision theoretical solution to the prisoner’s dilemma, then we probably don’t need to worry about surviving to face this problem.
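A crude toy sketch of that claim (my own stand-in, closer to “program equilibrium” than to actual TDT): agents that cooperate only with agents running the same decision rule outscore unconditional defectors once such agents predominate, so a defector gains by self-modifying.

```python
# Toy one-shot prisoner's dilemma. A "mirror" agent cooperates if and only
# if its opponent runs the same decision rule; this is a crude stand-in for
# TDT-style cooperation, not an implementation of TDT itself.

R, P, T, S = 3, 1, 5, 0   # reward, punishment, temptation, sucker payoffs

def decides_to_cooperate(me, opponent):
    """A 'mirror' agent cooperates iff the opponent runs the same rule;
    a 'defect' agent never cooperates."""
    return me == "mirror" and opponent == "mirror"

def play(a, b):
    """Return (payoff to a, payoff to b) for strategies 'mirror'/'defect'."""
    ca, cb = decides_to_cooperate(a, b), decides_to_cooperate(b, a)
    if ca and cb:
        return R, R
    if ca and not cb:
        return S, T
    if not ca and cb:
        return T, S
    return P, P

def average_payoff(strategy, population):
    """Expected payoff of `strategy` against a population given as
    {strategy: frequency}."""
    return sum(freq * play(strategy, other)[0] for other, freq in population.items())

population = {"mirror": 0.9, "defect": 0.1}   # assumed mix, for illustration
print(average_payoff("mirror", population))   # 2.8
print(average_payoff("defect", population))   # 1.0

# Once mirror agents predominate, an unconditional defector does strictly
# better by self-modifying into a mirror agent, which is the claim above.
```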
Now, there’s a generalized answer. It even covers the possibility of meeting aliens—finding TDT is a necessary condition for reaching the stars. Harsh punishment of inconsiderate expanders might still be required, but there could be a stable equilibrium without ever actually inflicting that punishment. That’s a new perspective for me, thanks!
To be even more general, suppose that there is at least one thing X that is universally necessary for effective superintelligences to function. X might be knowledge of the second law of thermodynamics, TDT, a computational substrate of some variety, or any number of other things. There are probably very many such X’s, many of which are entirely non-obvious to any entity that is not itself a superintelligence (i.e. us). Furthermore, there may be at least one thing Y that is universally incompatible with effective superintelligence. Y might be an absolute belief in the existence of the deity Thor or desiring only to solve the Halting Problem using a TM-equivalent. For the Hansonian model to hold, all X’s and no Y’s must be compatible with the desire and ability to expand and/or replicate.
This argument is generally why I dislike speculating about superintelligences. It is impossible for ordinary humans to have exhaustive (or even useful, partial) knowledge of all X and all Y. The set of all things Y in particular may not even be enumerable.
We cannot be sure that there are difficulties beyond our comprehension but we are certainly able to assign probabilities to that hypothesis based on what we know. I would be justifiably shocked if something we could call a super-intelligence couldn’t be formed based on knowledge that is accessible to us, even if the process of putting the seed of a super-intelligence together is beyond us.
Humans aren’t even remotely optimised for generalised intelligence; it’s just a trick we picked up to, crudely speaking, get laid. There is no reason that an intelligence of the form “human thinking minus the parts that suck and a bit more of the parts that don’t suck” couldn’t be created using the knowledge available to us, and that is something we can easily place a high probability on. Then you run the hardware at more than 60 Hz.
Oh, I agree. We just don’t know what self-modifications will be necessary to achieve non-speed-based optimizations.
To put it another way, if superintelligences are competing with each other and self-modifying in order to do so, predictions about the qualities those superintelligences will possess are all but worthless.
On this I totally agree!
Your point is spot on. Competition can not be relied on to produce adaptation if someone wins the competition once and for all.
Control, owned by preferences.
I wasn’t trying to make an especially long-term prediction:
“We saw the first millionaire in 1716, the first billionaire in 1916 - and can expect the first trillionaire within the next decade—probably before 2016.”
Inflation.
The richest person on earth currently has a net worth of $53.5 billion.
The greatest peak net worth in recorded history, adjusted for inflation, was Bill Gates’ $101 billion, which was ten years ago. No one since then has come close. A 10-fold increase in <6 years strikes me as unlikely.
In any case, your extrapolated curve points to 2116, not 2016.
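The back-of-the-envelope check, assuming the quoted series and ignoring inflation: three orders of magnitude of peak wealth per 200 years puts the first trillionaire around 2116.

```python
# Fit the quoted series (first millionaire 1716, first billionaire 1916)
# and extend it. Nominal dollars, ignoring inflation; this only checks
# the extrapolation, it doesn't endorse it.
years_per_tenfold = (1916 - 1716) / (9 - 6)        # ~66.7 years per 10x
first_trillionaire = 1916 + (12 - 9) * years_per_tenfold
print(round(first_trillionaire))                   # 2116, not 2016
```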
I am increasingly convinced that your comments on this topic are made in less than good faith.
Yes, the last figure looks wrong to me too—hopefully I will revisit the issue.
Update 2011-05-30: yes: 2016 was a simple math mistake! I have updated the text I was quoting from to read “later this century”.
Anyway, the huge modern wealth inequalities are well established—and projecting them into the future doesn’t seem especially controversial. Today’s winners in IT are hugely rich—and tomorrow’s winners may well be even richer. People thinking something like they will “be on the inside track when the singularity happens” would not be very surprising.
Projecting anything into a future with non-human intelligences is controversial. You have made an incredibly large assumption without realizing it. Please update.
If you actually want your questions answered, then money is society’s representation of utility—and I think there will probably be something like that in the future—no matter how far out you go. What you may not find further out is “people”. However, I wasn’t talking about any of that, really. I just meant while there are still money and people with bank accounts around.
A few levels up, you said,
My dispute is with the notion that people with bank accounts and machine intelligence will coexist for a non-trivial amount of time.
We have been building intelligent machines for many decades now. If you are talking about something that doesn’t yet exist, I think you would be well advised to find another term for it.
Apologies; I assumed you were using “machine intelligence” as a synonym for AI, as wikipedia does.
Machine intelligence *is*—more-or-less—a synonym for artificial intelligence.
Neither term carries the implication of human-level intelligence.
We don’t really have a good canonical term for “AI or upload”.
What about the recent “forbidden topic”? Surely that is a prime example of this kind of thing.
I know people who believe the reverse: that only people who believe in the Singularity get fucked by it.
(Well, not exactly, but more would hint at forbidden knowledge.)
ETA
Whoops, timtyler was faster than me in getting to that point.
The outside view of the pitch:
DOOM! - and SOON!
GIVE US ALL YOUR MONEY;
We’ll SAVE THE WORLD; you’ll LIVE FOREVER in HEAVEN;
Do otherwise and YOU and YOUR LOVED ONES will suffer ETERNAL OBLIVION!
Maybe there are some bits missing—but they don’t appear to be critical components of the pattern.
Indeed, this time there are some extra features not invented by those who went before—e.g.:
We can even send you to HEAVEN if you DIE a sinner—IF you PAY MORE MONEY to our partner organisation.
This one isn’t right, and is a big difference between religion and threats like extinction-level asteroids or AI disasters: one can free-ride if that’s one’s practice in collective action problems.
Also: Rapture of the Nerds, Not
It’s now official!
http://en.wikipedia.org/wiki/Rapture_of_the_Nerds
...now leads to a page that is extremely similar to:
http://en.wikipedia.org/wiki/Technological_singularity
...though—curiously—there are some differences between the two pages (count the words in the first sentence). [update: this difference was apparently due to the page being simultaneously cached and updated.]
Comparisons with The Rapture are insightful, IMHO. I see no good reason to deny them.
It turns out that ETERNAL OBLIVION is too weak. The community now has the doctrine of ETERNAL DAMNATION. For details, see here.
People need to stop being coy. If you know a difference, just spit it out, don’t force people to jump through meaningless hoops like “count the words in the first sentence”.
Downvoted for wasting people’s time with coyness because of a false belief caused by a cache issue.
Uh, no it doesn’t, and in fact this appears to be an actual lie (EDIT: Nope, cache issue) rather than the RotN page being changed since you checked it.
Before you start flinging accusations around, perhaps check, reconsider—or get a second opinion?
To clarify, for me, http://en.wikipedia.org/wiki/Rapture_of_the_Nerds still gives me:
Maybe Adelene meant that “now” is an untruth, in that it implies a change occurring between the timestamp of the comment you reply to and the reply itself. A truthful observation would be “RotN has always redirected to a page that, etc.”
The implication that you refer to is based on a simple misunderstanding of my comment—and does not represent a “lie” on my part.
Harumpf.
Wait, did you really mean “no, the page has always redirected there” instead of “no, the page does not, in fact, redirect there”?
“A page that is extremely similar to X” implies “a page that is not X”, assuming normal use of the English language. The rapture of the nerds page has always led to the technological singularity page, and the technological singularity page is not a page that is not the technological singularity page.
Reading the relevant comment with the strictest possible definitions of all the terms, it’s technically correct, but the way that the comment is structured implies an interpretation other than the one that is true, and it could easily have been structured in a way that wouldn’t imply such an interpretation.
Huh. Put like that, I guess I understand now, but it seems as though your refutation could also have been more clear on that point. Thanks for the disentangling!
The pages are subtly different—in the way I described in detail in my original comment. Count the words in the first sentence—the one starting: “A technological singularity is...” to see the difference.
My guess is that a Wikipedia “redirect” allows for a prefix header to be prepended, which would explain the difference.
All four versions of the page—redirect and not, secure and not—start with the same two sentences for me: “A technological singularity is a hypothetical event. It will occur if technological progress becomes so rapid and the growth of super-human intelligence so great that the future (after the singularity) becomes qualitatively different and harder to predict.”
I suspect you have a cache issue.
That seems likely. I used http://hidemyass.com/proxy/ - and it gives a more consistent picture.
Much time could have been saved had you copied and pasted the two diverging sentences rather than asking people to count the words. For indeed there was a recent change to the page, and if this was the source of the difference, then providing the exact sentences would have let the cause be determined quickly, avoiding a lot of back and forth.
Copying and pasting from a comparison, the slightly earlier version is:
“A technological singularity is a hypothetical event occurring when technological progress becomes so rapid and the growth of super-human intelligence is so great that the future after the singularity becomes qualitatively different and harder to predict.”
The slightly more recent version is:
“A technological singularity is a hypothetical event.”
The rest of the earlier sentence was split off into separate sentences.
Not that I am necessarily one to talk but much time could have been saved if nobody argued about such an irrelevant technicality. ;)
It was the key and only evidence in an accusation of lying, which is a pretty damn serious accusation that should neither be taken lightly nor made lightly. The evidence was small but the role it played in the accusation made it important. If your point is that the accuser should have held their tongue so to speak, you may be right. But they didn’t, and so the question took on importance.
Yes, responding to accusations of lying is important. Making them, not so much. :)
*nods* *edits ancestral comment*
Adelene, you are still being very discourteous!
I recommend that you calm down, try to be polite—and go a bit easier in the future on the baseless accusations and recriminations.
Since it redirects, the relevant history page is the technological singularity history page. Namely, this one. And there was indeed a recent change to the first sentence. See for example this comparison.
I’m seeing the redirect on the non-secure version.
I’m seeing the same thing as timtyler.
It is true that, this time around there are probabilities attached to some of the outcomes—but the basic END OF THE WORLD rescue pitch remains essentially intact.
I note that some have observed that worse fates may await those who get their priorities wrong at the critical stage.
I don’t understand why this was downvoted. It does sound like an accurate representation of the outside view.
This whole “outside view” methodology, where you insist on arguing from ignorance even where you have additional knowledge, is insane (outside of avoiding specific biases, such as the planning fallacy, that are induced by making additional detail available to your mind, where you indirectly benefit from basing your decision on ignorance).
In many cases outside view, and in particular reference class tennis, is a form of filtering the evidence, and thus “not technically” lying, a tool of anti-epistemology and dark arts, fit for deceiving yourself and others.
Perhaps compare a doomsday cult with a drug addict:
The outside view (e.g. of family and practitioners) looks one way—while the inside view often looks pretty different.
That’s not what “inside view” means. The way you seem to intend it, it admittedly is a useless tool, but having it as an option in the false dichotomy together with reference class tennis is transparently disingenuous (or stupid).
You seem to be thinking about reference class forecasting. In that particular case, I just meant looking from the outside—but the basic idea is much the same. Doomsday organisations have a pattern. The SIAI isn’t an ordinary one—but it shares many of the same basic traits with them.
We all already know about this pattern match. Its reiteration is boring and detracts from the conversation.
If this particular critique has been made more clearly elsewhere, perhaps let me know, and I will happily link to there in the future.
Update 2011-05-30: There’s now this recent article: The “Rapture” and the “Singularity” Have Much in Common—which makes a rather similar point.
It may have been downvoted for the caps.
Given that a certain fraction of comments are foolish, you can expect that an even larger fraction of votes are foolish, because there are fewer controls on votes (e.g. a voter doesn’t risk his reputation while a commenter does).
Which is why Slashdot (which was a lot more worthwhile in the past than it is now) introduced voting on how other people vote (which Slashdot called metamoderation). Worked pretty well: the decline of Slashdot was mild and gradual compared to the decline of almost every other social site that ever reached Slashdot’s level of quality.
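For anyone unfamiliar with the mechanism, here is a minimal sketch of a metamoderation-style scheme (a simplification of my own, not Slashdot’s actual algorithm): votes are themselves rated fair or unfair, and a voter whose votes are often rated unfair loses influence.

```python
from collections import defaultdict

# Minimal sketch of metamoderation: ordinary votes are themselves judged
# "fair" or "unfair" by other users, and a voter's future votes are
# weighted by their fairness record. A simplification for illustration,
# not Slashdot's actual algorithm.

fairness = defaultdict(lambda: {"fair": 1, "unfair": 1})  # Laplace-smoothed counts

def metamoderate(voter, judged_fair):
    """Record one meta-vote on one of `voter`'s past moderations."""
    fairness[voter]["fair" if judged_fair else "unfair"] += 1

def vote_weight(voter):
    """Weight a voter's votes by the fraction of their moderations judged fair."""
    record = fairness[voter]
    return record["fair"] / (record["fair"] + record["unfair"])

def comment_score(votes):
    """Sum weighted votes; `votes` is a list of (voter, +1 or -1) pairs."""
    return sum(direction * vote_weight(voter) for voter, direction in votes)

# A voter who is repeatedly metamoderated as unfair ends up with little say.
for _ in range(8):
    metamoderate("careless_voter", judged_fair=False)
metamoderate("careful_voter", judged_fair=True)

print(comment_score([("careful_voter", +1), ("careless_voter", -1)]))  # ~0.57
```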
Yes: votes should probably not be anonymous—and on “various other” social networking sites, they are not.
Metafilter, for one. It is hard for an online community to avoid becoming worthless, but Metafilter has avoided that for 10 years.
Perhaps downvoted for suggesting that the salvation-for-cash meme is a modern one. I upvoted, though.
Hmm—I didn’t think of that. Maybe deathbed repentance is similar as well—in that it offers sinners a shot at eternal bliss in return for public endorsement—and maybe a slice of the will.
We all already know about this pattern match. Reiterating it is boring and detracts from the conversation, and I downvote any such comment I see.
Pattern completion isn’t always wrong.