Another interesting comment by Scott Aaronson on why he is less of a “pure reductionist” than he used to be. One of his many points is related to “singulatarians”:
My contacts with the singularity movement, and especially with Robin Hanson and Eliezer Yudkowsky, who I regard as two of the most interesting thinkers now alive (Nick Bostrom is another). I give the singulatarians enormous credit for taking the computational theory of mind to its logical conclusions—for not just (like most scientifically-minded people) paying lip service to it, but trying extremely hard to think through what it will actually be like when and if we all exist as digital uploads, who can make trillions of copies of ourselves, maybe “rewind” anything that happened to us that we didn’t like, etc. What will ethics look like in such a world? What will the simulated beings value, and what should they value?

At the same time, the very specificity of the scenarios that the singulatarians obsessed about left a funny taste in my mouth: when I read (for example) the lengthy discourses about the programmer in his basement clicking “Run” on a newly-created AI, which then (because of bad programming) promptly sets about converting the whole observable universe into paperclips, I was less terrified than amused: what were the chances that, out of all the possible futures, ours would so perfectly fit the mold of a dark science-fiction comedy? Whereas the singulatarians reasoned:
“Our starting assumptions are probably right, ergo we can say with some confidence that the future will involve trillions of identical uploaded minds maximizing their utility functions, unless of course the Paperclip-Maximizer ‘clips’ it all in the bud”
I accepted the importance and correctness of their inference, but I ran it in the opposite direction:
“It seems obvious that we can’t say such things with any confidence, ergo the starting assumptions ought to be carefully revisited—even the ones about mind and computation that most scientifically-literate people say they agree with.”
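To make the quoted talk of “maximizing their utility functions” concrete, here is a deliberately trivial sketch in Python. Everything in it (the agent, its two actions, its objective) is invented here for illustration and is not from Scott’s comment or anyone’s actual proposal; the only point is that an optimizer whose utility counts paperclips and nothing else converts every available resource into paperclips.

```python
# Toy "paperclip maximizer": a greedy optimizer with a mis-specified
# objective. All names and numbers here are made up for illustration.

world = {"matter": 10, "paperclips": 0}

def utility(w):
    # The (badly) specified objective: count paperclips, value nothing else.
    return w["paperclips"]

def actions(w):
    # Each candidate action is represented by the world it would produce.
    results = [dict(w)]                      # doing nothing is an option...
    if w["matter"] > 0:                      # ...or turn one unit of matter
        results.append({"matter": w["matter"] - 1,       # into a paperclip
                        "paperclips": w["paperclips"] + 1})
    return results

while True:
    best = max(actions(world), key=utility)  # pick the utility-maximizing act
    if utility(best) == utility(world):
        break                                # nothing improves utility; halt
    world = best

print(world)  # {'matter': 0, 'paperclips': 10} -- all matter is now paperclips
```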
I don’t see how Scott’s proposed revision of the starting assumptions actually changes the conclusions. Even if he is right that uploads and AIs with a “digital abstraction layer” can’t be conscious, that’s not going to stop a future involving trillions of uploads, or stop paperclip maximizers.
If these uploads are p-zombies (Scott-zombies?) because they are reversible computations, then their welfare doesn’t matter: the prediction about what the future will contain stays the same, but the ethical conclusions about it change. And I don’t think he says that it prevents paperclip maximizers.
So Scott meant to argue against “the future should involve trillions of uploads” rather than “the future will involve trillions of uploads”?
He suggests that all those uploads might not be conscious if they are run reversibly on a quantum computer (or otherwise sit behind some “clean digital abstraction layer”). He admits that this is hugely speculative, but it is still an alternative that orthodox reductionists don’t usually consider.
The first serious attempt at this that I’ve seen is Greg Egan’s Permutation City, which came out 20 years ago.
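For readers who want the “reversible computation” point above made concrete, here is a minimal classical sketch in Python. It is entirely my own toy example, not anything from Scott’s argument: the “mind state” is just a pair of 32-bit words, and the update rule is a Feistel round, chosen only because its inverse is trivial to write down. After running forward and then rewinding, no record of the run remains; whether that property has anything to do with consciousness is exactly the speculation under discussion.

```python
# A computation is reversible when each step is a bijection with a known
# inverse, so the entire run can be wound back exactly. This toy uses a
# Feistel round on a pair of 32-bit words; the construction is standard,
# but its use as a stand-in for an "upload" is purely illustrative.

MASK = 0xFFFFFFFF  # work modulo 2**32

def mix(x, step):
    # Arbitrary deterministic mixing function; nothing about it is special.
    return (x * 2654435761 + step) & MASK

def step_forward(state, step):
    left, right = state
    return right, left ^ mix(right, step)   # one Feistel round: a bijection

def step_backward(state, step):
    left, right = state
    return right ^ mix(left, step), left    # exact inverse of step_forward

start = (123, 456)
state = start
for t in range(1000):             # run the "upload" forward...
    state = step_forward(state, t)
for t in reversed(range(1000)):   # ...then rewind every step exactly
    state = step_backward(state, t)
assert state == start             # back where we began; no trace of the run
```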