The point is that all object-level arguments for and against these scenarios, even if you call them “probability estimates”, are ultimately based on intuitions which are difficult to formalize or quantify.
The scenarios hypothesized by the Singularitarians are extreme, both in the magnitude of the effect they are claimed to entail and in the highly conjunctive object-level arguments used to argue for them. Common-sense rationality tells us that “extraordinary claims demand exceptional evidence”. How do we evaluate whether the intuitions of these people constitute “exceptional evidence”?
So we take the “outside view” and try to meta-reason about these arguments and the people making them: Can we trust their informal intuitions, or do they show any signs of classical biases? Are these people privileging the hypothesis? Are they drawing their intuitions from the availability heuristic?
If intelligence explosion/cryonics/all things singularitarian were ideas radically different from any common meme, then the answer to these questions would likely be no: these ideas would appear counterintuitive at a gut level to most normally rational people, possibly in the same way quantum mechanics and Einsteinian relativity appear counterintuitive. If domain-level experts, after studying the field for years, recalibrated their intuitions and claimed that these scenarios were likely, then we should probably listen to them. We should not just accept their claims based on authority, of course: even experts can be subject to groupthink and other biases (cough...economists...cough), but as far as the “outside view” is concerned, we would at least have plausibly excluded the availability bias.
What we observe, instead, is that singularitarian ideas strongly pattern-match to Christian millenarianism and similar religious beliefs, mixed with popular scifi tropes (cryonics, AI revolt, etc.). They certainly originated from, or at least were strongly influenced by, these memes, and therefore the intuitions of the people arguing for them are likely “contaminated” by these memes via the availability heuristic. More specifically, if singularitarian ideas make intuitive sense to you, you can’t even trust your own intuitions, since they are likely to be “contaminated” as well.
Add the fact that the strength of these intuitions seems to decrease rather than increase with domain expertise, suggesting that the Dunning–Kruger effect is also at work, and the “outside view” tells us to be wary.
Of course, it is possible to believe correct things even when they are likely to be subject to biases, or even to believe correct things that many people believe for the wrong reasons, but in order to make a case for these beliefs, you need some airtight arguments with strong evidence. As far as I can tell, MIRI/FHI/other Singularitarians have provided no such arguments.
They certainly originated from, or at least were strongly influenced by, these memes
Originated? Citation needed, seriously.
What we observe, instead, is that singularitarian ideas strongly pattern-match to Christian millenarianism and similar religious beliefs, mixed with popular scifi tropes (cryonics, AI revolt, etc.).
Not a very strong pattern match. In Christian millenarianism, you have the good being separated from the bad. And this is considered good, even with all of the horror. Also, the humans don’t cause the good and bad things. It’s God. Also, it’s prophesied and certain to happen in a particular way.
In a typical FOOM scenario, everyone shares their fate regardless of any personal beliefs. And if it’s bad for people, it’s considered bad—no excuses for any horror. And humans create whatever it is that makes the rest happen, so that ‘no excuses’ is really salient. There are many ways it could work out; there is no roadmap. This produces a pretty much diametrically opposite attitude: ‘be really careful and don’t trust that things are going to work out okay’.
So the pattern-match fails on closer inspection. “We are heading towards something dangerous but possibly awesome if we do it just right” just isn’t like “God is going to destroy the unbelievers and elevate the righteous, you just need to believe!” in any relevant way.
I’ve heard the singularity-pattern-matches-religious-tropes argument before and hadn’t given it much thought, but I find your analysis of why the argument fails convincing, at least for the futurism I’m acquainted with. I’m less sure it holds for Kurzweil’s brand of futurism.
Citation for what? We can’t be sure of what was going on in the heads of the Singularitarians when they came up with these ideas, but it seems obvious that people like Kurzweil, Hanson, Bostrom, Yudkowsky, etc., were quite familiar with Christian millenarianism and scifi tropes.
In a typical FOOM scenario, everyone shares their fate regardless of any personal beliefs.
Well, those who died...pardon...“deanimated” without signing up for cryonics are out of luck; robot Jesus will not raise them from their icy graves.
Several variants of the Singularity allow different outcomes for different people, see Hanson’s Malthusian EM society for instance. Yudkowsky’s CEV-FAI is (was?) supposed to impose a global morality based on some sort of “extrapolated” average of people’s moralities. Some people may not like it. And let’s not even get started with the Basilisk...
Anyway, Singularitarianism is not Christianity, so if you look at a sufficient level of detail you can certainly find some differences. But it seems clear to me that they are related.
Citation for what? We can’t be sure of what was going on in the heads of the Singularitarians when they came up with these ideas, but it seems obvious that people like Kurzweil, Hanson, Bostrom, Yudkowsky, etc., were quite familiar with Christian millenarianism and scifi tropes.
If that’s all you’ve got, then you totally made the idea up. Why would a bunch of atheists be positively inclined towards a story that resembled something they rejected more or less directly?
Well, those who died...pardon...“deanimated” without signing up for cryonics are out of luck; robot Jesus will not raise them from their icy graves.
This is still really really different.
A) Only a tiny fraction of people who expect a singularity are into cryo. It’s not the same belief.
B) Even if there is no singularity at all, cryo could pay off. They’re separate things causally as well. You don’t need a Robot Jesus to reanimate or upload someone, just amazingly awesome medical technology.
C) Everyone still alive at the time experiences the consequences, good or bad, so that’s kind of moot if the singularity is to be expected any time vaguely soon. Outside of the basilisk, whether you brought it about or not doesn’t have an impact—and taking the basilisk seriously would make one an extreme outlier.
D) If it turns out that existing cryo tech doesn’t work, then the people who did sign up are SOL too, as is anyone who did sign up for cryo but didn’t get frozen for whatever reason. These are very real risks taken seriously by almost everyone who does support cryo.
E) The only moral judgement here is on people who don’t let others be frozen… and see C. There’s no element of karma here, no justice. Just, do ‘the smart thing’ or don’t (FYI, I am not signed up for cryo).
allow different outcomes for different people, see Hanson’s Malthusian EM society for instance.
That looks like the same outcome for everyone to me. The ‘survivors’ are ground down to pure economics by Moloch. Plus, you seem to be overinterpreting my ‘same outcome’ statement: I meant the outcome of the singularity, not personal outcomes.
Yudkowsky’s CEV-FAI is (was?) supposed to impose a global morality based on some sort of “extrapolated” average of people’s moralities. Some people may not like it.
Whoa there. It would itself act in accordance with said morality. If said morality is pluralistic, which seems very likely considering that it’s built on two layers of indirection, then it does not end up imposing a global morality on anyone else.
Anyway, Singularitarianism is not Christianity, so if you look at a sufficient level of detail you can certainly find some differences. But it seems clear to me that they are related.
I didn’t exactly have to probe deeply, and considering that the philosophical effect of the belief is diametrically opposite, I certainly don’t think I went too deeply. It feels shoehorned in to me.
If that’s all you’ve got, then you totally made the idea up.
What would a citation for it look like?
Why would a bunch of atheists be positively inclined towards a story that resembled something they rejected more or less directly?
I don’t know, maybe because they were raised in highly religious families (Hanson and Muehlhauser in particular; Yudkowsky mentions an Orthodox Jewish upbringing, but I don’t know how religious his parents were; I don’t know about the other folks) and they are scared by the realization that they live in a world “Beyond the Reach of God”?
Anyway, we don’t have to psychoanalyze them. Similarity of beliefs and familiarity with the hypothetical source is evidence of relatedness.
I didn’t exactly have to probe deeply, and considering that the philosophical effect of the belief is diametrically opposite, I certainly don’t think I went too deeply. It feels shoehorned in to me.
You could compare different Christian denominations and find different “philosophical effect of the belief” (e.g. the five “Solae” of early Protestantism vs Catholic theology), but this doesn’t mean that they are unrelated.
I don’t know if this is a relevant data point, but I was raised in an atheist communist family, and I still like the idea that people could live forever (or at least much longer than today) and I think the world could be significantly improved.
It seems to me one doesn’t need a religious background for this, only to overcome some learned helplessness and status-quo fatalism. Okay, religion (and communism too) already provides you with a story of a radical change in the future, so it kinda opens the door… but I think that living in the 20th/21st century and watching the world around you change dramatically should allow one to extrapolate even if one hadn’t heard such ideas before.
Anything they wrote or said that might lead you to believe that there is actually this connection, beyond pure supposition?
‘Beyond the Reach of God’ is at least in the right vein, though there are two teensy weensy difficulties (i.e. it’s completely useless to your argument). First, the fellow who wrote it was never Christian, so Christian Millenarianism wouldn’t be ingrained into him. Second, ‘Beyond the Reach of God’ doesn’t aim itself back into religion, still less Revelations-style religion. ‘Let’s build a tool that makes life fair’ is completely crosswise to any religious teaching.
You could compare different Christian denominations and find different “philosophical effect of the belief” (e.g. the five “Solae” of early Protestantism vs Catholic theology), but this doesn’t mean that they are unrelated.
Yes, and they are obviously related due to all being substantially the same thing—heck, they share their NAME. Having opposite philosophical conclusions is a good reason to cut off a particular line of reasoning (namely, that someone generated an idea by pattern-matching to an existing narrative) in the absence of any other evidence that they did so besides a mediocre pattern-match. I didn’t claim it was a general disproof.
When you have two ideas that are called differently, claim no common origin, where one came from revelation while the other from reasoning presented publicly, one claims certainty while the other claims uncertainty, one is a moral claim while the other is a factual claim, one is supernatural and the other is materialistic...
and,
the connections between them are that they both claim to accomplish several highly desirable things like: raising the dead and keeping people alive forever, and doing so for all the world...
the high desirability of these things means that multiple people would aim to accomplish them, so aiming to accomplish them does not indicate shared origin!
First, the fellow who wrote it was never Christian, so Christian Millenarianism wouldn’t be ingrained into him.
He was born and raised in a predominantly Protestant Christian society, where these beliefs are widespread. And, by the way, apocalyptic beliefs existed in all religions and cultures, including Judaism (Christianity was originally a messianic and arguably apocalyptic Jewish cult).
Second, ‘Beyond the Reach of God’ doesn’t aim itself back into religion, still less Revelations-style religion. ‘Let’s build a tool that makes life fair’ is completely crosswise to any religious teaching.
‘Salvation through good works’ comes to mind. More generally, various doomsday cults have beliefs involving the cult members having to perform specific actions in order to trigger the Apocalypse or make sure that it unfolds in the intended way.
I don’t want to push the pattern-matching too far. ‘Singularity is a cult’ has already been debated ad nauseam here, and is probably an exaggerated position. It suffices to say that singularitarian and religious ideas probably appeal to the same kind of psychological mechanisms and heuristics, some innate and some acquired or reinforced by culture.
As I said in my original comment, this doesn’t necessarily imply that singularitarian beliefs are wrong, but it strongly suggests that we should be wary of availability-heuristic and privileging-the-hypothesis biases when we evaluate them.
When you have two ideas that are called differently, claim no common origin,
‘Beyond the Reach of God’ seems evidence to the contrary.
one came from revelation while the other from reasoning presented publicly,
Fair enough.
one claims certainty while the other claims uncertainty,
Does it? I’m under the impression that singularitarians believe that, barring some major catastrophe, the Singularity is pretty much inevitable.
one is a moral claim while the other is a factual claim,
No. Both are factual claims about events that are expected to happen in the future. They may be more or less falsifiable, depending on how much the authors commit to specific deadlines.
one is supernatural and the other is materialistic...
Any sufficiently advanced technology is indistinguishable from magic.
Between outside view, Dunning–Kruger, and rhetorical questions about biases with no attempt to provide answers to them, you’ve built a schema for arguing against anything at all without the burden of bringing evidence to the table. I guess evidence would be the dreaded inside view, although that doesn’t stop you demanding it from the other side. Bostrom’s recent book? The arguments in the Sequences? No, that doesn’t count, it’s not exceptional enough, and besides, Dunning–Kruger means no-one ever knows they’re wrong, and (contd. p.94).
Maybe a better name for “outside view” would be “spectator’s view”, or “armchair view”.
Between outside view, Dunning–Kruger, and rhetorical questions about biases with no attempt to provide answers to them, you’ve built a schema for arguing against anything at all without the burden of bringing evidence to the table.
I don’t think so. Try to use this scheme to argue against, say, quantum mechanics.
Bostrom’s recent book? The arguments in the Sequences? No, that doesn’t count, it’s not exceptional enough
I haven’t read Bostrom’s recent book. Given that he’s a guy who takes the simulation hypothesis seriously, I don’t expect much valuable insight from him, but I could be wrong, of course. If you think he has some substantially novel strong argument, feel free to point it out to me.
The Sequences discuss cryonics using weak arguments (e.g. the hard drive analogy). AFAIK they don’t focus on intelligence explosion. I think that the Yudkowsky/Muehlhauser/MIRI argument for intelligence explosion is Good’s argument, variously expanded and articulated in the Yudkowsky/Hanson debate. Needless to say, I don’t find this line of argument very convincing. Again, feel free to refer me to any strong argument that I might be missing.