“But if we assume that Lenin made his decisions after the fashion of an ordinary human brain, and not by virtue of some alien mechanism seizing and overriding his decisions, then Lenin would still be exactly as much of a jerk as before.”
I must admit that I still don’t really understand this. It seems to violate what we usually mean by moral responsibility.
“When, in a highly sophisticated form of helpfulness, I project that you would-want lemonade if you knew everything I knew about the contents of the refrigerator, I do not thereby create a copy of Michael Vassar who screams that it is trapped inside my head.”
This is, I think, because humans are a tiny subset of all possible computers, and not because there’s a qualitative difference between predicting and creating. It is, for instance, possible to look at a variety of factorial algorithms and rearrange them so that they predictably compute triangular numbers instead. This, of course, doesn’t mean that you can look at an arbitrary algorithm and determine whether it computes triangular numbers. I conjecture that, in the general case, it’s impossible to predict the output of an arbitrary Turing machine at any point along its computation without performing a computation at least as long as the one the original Turing machine performs. Hence, predicting the output of a mind-in-general would require at least as much computing power as running the mind-in-general.
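The factorial-to-triangular rearrangement can be made concrete. A minimal sketch in Python (the function names are my own): both functions are the same fold over 1..n, and swapping the operator and its identity element converts one into the other.

```python
from functools import reduce

# A standard iterative factorial: fold multiplication over 1..n,
# starting from the multiplicative identity 1.
def factorial(n):
    return reduce(lambda acc, k: acc * k, range(1, n + 1), 1)

# Rearranging the same algorithm -- replace multiplication with
# addition and the identity 1 with 0 -- yields the n-th triangular
# number, 1 + 2 + ... + n.
def triangular(n):
    return reduce(lambda acc, k: acc + k, range(1, n + 1), 0)
```

Because we can see the algorithm's structure, the transformation is easy; recognizing from the outside that some arbitrary program computes triangular numbers is, as the paragraph above notes, another matter entirely.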
Incidentally, I think that there’s a selection bias at work here due to our limited technology. Since we don’t yet know how to copy or create a human, all of the predictions about humans that we come up with are, by necessity, easier than creating a human. However, for most predictions on most minds, the reverse should be true. Taking Michael Vassar and creating an electronic copy (uploading), or creating a human from scratch with a set of prespecified characteristics, are both technologically feasible with tools we know how to build. Creating a quantum simulation of Michael Vassar or a generic human to predict their behavior would be utterly beyond the processing power of any classical computer.