It always annoys me when people try to evaluate ideas from their social context rather than their content. It may or may not be true that transhumanism is a “secular outgrowth of Christian eschatology” or “essentially an argument for intelligent design”, but whether it is or not, you should still be able to evaluate it as a prediction about the future based on our knowledge of today. It’s not as if AIs, which work according to the laws of physics, are suddenly going to crumble to dust because they were made by people of the wrong religion.
I think there’s a rule-of-thumby reading of this that makes a little bit more sense. It’s still prejudiced, though.
A lot of religions have a narrative that ends in true believers being saved from death and pain, after which people no longer struggle over petty issues like the scarcity of goods. I run into transhumanists every so often who have bolted these ideas onto their narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they’re going to make sure most humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren’t, some of our smartest robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering, and they’re not even superintelligent.
I imagine some transhumanism fans must have good reasons to put these things in the narrative, but I think it’s extremely worth pointing out that these are ideas humans love aesthetically. If they’re true, great for us, but it’s a very pretty version of the truth. Even if I’m wrong, I’m skeptical of people who make definite assertions about what superintelligences will do, because if we knew what superintelligences would do, then we wouldn’t need superintelligences. It would really surprise me if the future they produce looked just like one of our salvation narratives.
(obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don’t think this defeats my point, because for the conditions of these people’s narrative to be met, we need the superintelligence to do things we wouldn’t have thought of in most of the domains relevant to creating a utopia)
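For what it’s worth, here is a rough sketch of the thermodynamic worry gestured at above, assuming Landauer’s bound applies to whatever substrate the minds run on (the bound itself is standard physics; applying it here is my illustration, not part of the original comment). Erasing one bit of information dissipates at least

\[ E_{\text{bit}} \ge k_B T \ln 2 \approx 2.9 \times 10^{-21}\,\text{J} \quad \text{at } T = 300\,\text{K}, \]

so a finite free-energy budget \(F\) allows at most

\[ N \le \frac{F}{k_B T \ln 2} \]

irreversible computational steps. “Living forever” therefore requires either an unbounded supply of free energy or computation that becomes asymptotically reversible as the universe cools, neither of which is obviously available.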
This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low-probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.
If you’re not convinced that utopian outcomes are even possible, isn’t that completely compatible with the claim that utopian futures are not inevitable and low-probability?
Let’s try this on religion:
If you are not convinced that heaven is even possible, isn’t that completely compatible with Jesus saying that “narrow is the way, which leadeth unto life, and few there be that find it”?
Ummm… not quite.
“low probability possibilities they must work towards”
It’s weird to devote your life to something that is impossible / logically inconsistent.
This reminds me of how some people notice that superintelligent AI is just another version of the Golem… but the same people fail to notice that the ordinary computers around us are already just another version of the Golem.
Which further reminds me of Chesterton writing:
Students of popular science [...] are always insisting that Christianity and Buddhism are very much alike [...] The reasons were of two kinds: resemblances that meant nothing because they were common to all humanity, and resemblances which were not resemblances at all. The author solemnly explained that the two creeds were alike in things in which all creeds are alike, or else he described them as alike in some point in which they are quite obviously different. [...] it was gravely urged that [Christ and Buddha], by a singular coincidence, both had to do with the washing of feet. You might as well say that it was a remarkable coincidence that they both had feet to wash.
Is there actually a version of the Golem tale where AI-risk is a theme? I had a look once and I couldn’t actually find a version where the Golem fastidiously follows its instructions beyond their intended meaning. Perhaps people are just confusing it with The Sorcerer’s Apprentice?
Quite possibly; in which case I would also belong to the set of confused people.
Until I actually looked into this, so was I. In my case I think it’s Terry Pratchett’s fault: in Feet of Clay he describes golems as prone to continuing a task forever unless told to stop.
From the MIRI paper “Intelligence Explosion and Machine Ethics”:
Let us call this precise, instruction-following genie a Golem Genie. (A golem is a creature from Jewish folklore that would in some stories do exactly as told [Idel 1990], often with unintended consequences, for example polishing a dish until it is as thin as paper [Pratchett 1996].)
(The “Idel” reference goes to Idel, Moshe. 1990. Golem: Jewish Magical and Mystical Traditions on the Artificial Anthropoid. SUNY Series in Judaica. Albany: State University of New York Press.)
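To make the quoted failure mode concrete, here is a toy sketch in Python; every name and number in it is invented for illustration and comes from neither the paper nor the thread. The loop satisfies the literal instruction while destroying the thing the instruction was about.

```python
# Toy "Golem Genie" in the sense of the quoted passage: an agent that
# satisfies the literal instruction ("polish until shiny") and checks
# nothing else. All names and numbers are invented for illustration.

def polish_until_shiny(dish, shine_target=1.0):
    """Follow the literal instruction; the intended meaning never appears."""
    while dish["shine"] < shine_target:   # the only stop condition is the literal goal
        dish["shine"] += 0.05             # each pass adds a little shine...
        dish["thickness_mm"] -= 0.2      # ...and grinds away material that the
                                          # instruction never said to preserve
    return dish

print(polish_until_shiny({"shine": 0.0, "thickness_mm": 3.0}))
# Prints roughly {'shine': 1.0, 'thickness_mm': -1.0}: the literal goal is met
# and the dish is thinner than paper, exactly the unintended consequence.
```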
But are you really evaluating the content of transhumanism from outside your social context? Most transhumanists are humanists, and thus can trace their philosophical lineage back through the Enlightenment, the Protestant Reformation, Catholic monks translating Arabic texts into Latin, Zoroastrians and Muslims translating Greek texts into Aramaic, and Hellenistic post-Socratic philosophers writing their ideas down in reaction to pre-Socratic ideas (and this is just where the paper trail ends). All of that context has helped shape modern humanism, and through that context humanists have notions of what they consider epistemologically sound and what values they support. These influence how humanists evaluate the content of transhumanism.
At best we might say that because transhumanism was developed by humanists, the humanist interpretation of transhumanism is privileged because it gives perspective on the origins of the ideas, yet that doesn’t mean we can’t find other contexts in which to make sense of transhumanism. To deny them, or even just be annoyed by them, is to exert pressure against the very process that generated transhumanism in the first place: successive reinterpretation and expansion of ideas that have their origins in pre-Socratic Hellenism.
There is no way to consider transhumanism, or any idea, outside of a context; to do so is to blind oneself to the lens through which one sees the world.
If I remember this correctly, the alphabet itself (without which there could be no paper trail) was invented by the Phoenicians.
Phoenicians also invented money. Peter Thiel has a lot of money, and he supports transhumanism. He also supports Donald Trump.
...just adding more context...
My sentence
It always annoys me when people try to evaluate ideas from their social context rather than their content.
Contains a grammatical ambiguity; the first “their” could refer to the people or the ideas. I meant it to refer to the ideas. I’m not asking people to stop using their own social norms when they judge ideas. I am saying that the society from which an idea originated is irrelevant to judging the truth of that idea. (At least once you’ve fully understood what the idea is. Before that you might need to understand its context in order to resolve ambiguities in the description of the idea.)
So I’m not claiming that I’m not biased by my cultural heritage (although of course I aspire to be unbiased), I’m just saying that transhumanism shouldn’t be attacked or defended based on its heritage.