I think there’s a rule-of-thumby reading of this that makes a little bit more sense. It’s still prejudiced, though.
A lot of religions have a narrative that ends with true believers being saved from death and pain, after which people no longer struggle over petty issues like scarcity of goods. Every so often I run into transhumanists who have bolted these ideas onto their own narratives. According to some of these people, the robots are going to try hard to end suffering and poverty, and they’re going to make sure most humans live forever. In practice, that goal is dubious from a thermodynamics perspective, and even if it weren’t, some of our smartest robots are currently doing high-frequency trading and winning ad revenue for Google employees. That alone has probably increased net human suffering, and they’re not even superintelligent.
I imagine some transhumanism fans have good reasons to put these things in the narrative, but it’s well worth pointing out that these are ideas humans love aesthetically. If they’re true, great for us, but they make for a suspiciously pretty version of the truth. Even if I’m wrong about that, I’m skeptical of anyone who makes definite assertions about what superintelligences will do, because if we knew what superintelligences would do, we wouldn’t need superintelligences. It would really surprise me if the outcome looked just like one of our salvation narratives.
(Obligatory nitpick disclaimer: a superintelligence can be surprising in some domains and predictable in others, but I don’t think this defeats my point, because for the conditions of these people’s narrative to be met, the superintelligence has to do things we wouldn’t have thought of in most of the domains relevant to creating a utopia.)
This argument notably holds true of FAI / control theory efforts. Proponents of FAI assert that heaven-on-Earth utopian futures are not inevitable outcomes, but rather low-probability possibilities they must work towards. It still seems overtly religious and weird to those of us who are not convinced that utopian outcomes are even possible / logically consistent.
If you’re not convinced that utopian outcomes are even possible, isn’t that completely compatible with the claim that utopian futures are not inevitable, merely low-probability?
Let’s try this on religion:
If you are not convinced that heaven is even possible, isn’t that completely compatible with Jesus saying that “narrow is the way, which leadeth unto life, and few there be that find it”?
Ummm… not quite.
“low-probability possibilities they must work towards”
It’s weird to devote your life to working towards something that is impossible / logically inconsistent.