(1) I agree that we can easily conceive of a world where most entities able to pass the Turing Test are copyable. I agree that it’s extremely interesting to think about what such a world would be like—and maybe even try to prepare for it if we can. And as for how the copyable entities will reason about their own existence—well, that might depend on the goals of whoever or whatever set them loose! As a simple example, the Stuxnet worm eventually deleted itself if it decided it was on a computer that had nothing to do with Iranian centrifuges. We can imagine that each copy “knew” about the others, and “knew” that it might need to kill itself for the benefit of its doppelgangers. And as for why it behaved that way—well, we could answer that question in terms of the code, or in terms of the intentions of the people who wrote the code. Of course, if the code hadn’t been written by anyone, but was instead (say) the outcome of some evolutionary process, then we’d have to look for an explanation in terms of that process. But of course it would help to have the code to examine! (For concreteness, a toy sketch of what such self-deleting code might look like appears after point (3) below.)
(2) You argue that, if I were copyable, then the copies would wonder about the same puzzles that the “uncopyable” version wonders about—and for that reason, it can’t be legitimate even to try to resolve those puzzles by assuming that I’m not copyable. Compare to the following argument: if I were a character in a novel, then that character would say exactly the same things I say, for the same reasons, and wonder about the same things that I wonder about. Therefore, when reasoning about (say) physics or cosmology, it’s illegitimate even to make the tentative assumption that I’m not a character in a novel. This is a fun argument, but there are several possible responses, among them: haven’t we just begged the question, by assuming there is something it’s like to be a copyable em or a character in a novel? Again, I don’t declare, with John Searle, that there’s obviously nothing it’s like to be such an entity, that anyone who thinks there is needs their head examined, etc. etc. On the other hand, even if I were a character in a novel, I’d still be happy to have that character assume it wasn’t a character—that its world was “real”—and see how far it could get with that assumption.
(3) No, I absolutely don’t think that we can learn whether we’re copyable or not by “introspecting on the quality of our subjective experience,” or that we’ll ever be able to do such a thing. The sort of thing that might eventually give us insight into whether we’re copyable or not would be understanding the effect of microscopic noise on the sodium-ion channels, whether the noise can be grounded in PMDs, etc. If you’ll let me quote from Sec. 2.1 of my essay: “precisely because one can’t decide between conflicting introspective reports, in this essay I’ll be exclusively interested in what can be learned from scientific observation and argument. Appeals to inner experience—including my own and the reader’s—will be out of bounds.”
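To make the remark about examining the code a little more concrete: here’s a toy sketch, in Python, of the kind of “check the host, otherwise delete yourself” rule I had in mind in point (1). It is emphatically not Stuxnet’s actual logic (which isn’t available in this form); the INSTALLED_SOFTWARE environment variable and the target signatures are invented purely for illustration.

```python
# Toy sketch only -- not Stuxnet's actual logic. A copyable agent checks whether
# its host looks like a target; if not, it erases this copy so as not to draw
# attention away from its doppelgangers on the machines that do matter.
import os
import sys

# Hypothetical markers that the host is one the copies were set loose to act on.
TARGET_SIGNATURES = {"siemens_step7", "wincc"}

def host_signatures():
    """Stand-in for whatever environment probing the real code would do.
    Here we just read a hypothetical INSTALLED_SOFTWARE environment variable."""
    raw = os.environ.get("INSTALLED_SOFTWARE", "")
    return {name.strip().lower() for name in raw.split(";") if name.strip()}

def run_payload():
    """Whatever the copies were set loose to do on a relevant host."""
    pass

def self_delete():
    """Remove this copy's own file, then exit, leaving the other copies to carry on."""
    try:
        os.remove(os.path.abspath(__file__))
    finally:
        sys.exit(0)

if __name__ == "__main__":
    if TARGET_SIGNATURES & host_signatures():
        run_payload()
    else:
        self_delete()
```

The point is just that, once something like this is in front of you, “why did the copy kill itself?” has a perfectly mundane answer in terms of the code and the intentions behind it.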
And as for how the copyable entities will reason about their own existence
I’m not so much interested in how they will reason as in how they should reason.
The sort of thing that might eventually give us insight into whether we’re copyable or not would be understanding the effect of microscopic noise on the sodium-ion channels, whether the noise can be grounded in PMDs, etc.
When you say “we” here, do you literally mean “we” or do you mean “biological humans”? Because I can see how understanding the effect of microscopic noise on the sodium-ion channels might give us insight into whether biological humans are copyable, but it doesn’t seem to tell us whether we are biological humans or, for example, digital simulations (and therefore whether your proposed solution to the philosophical puzzles is of any relevance to us). I thought you were proposing that, if your theory is correct, then we would eventually be able to determine that by introspection, since you said copyable minds might have no subjective experience or a different kind of subjective experience.
(1) Well, that’s the funny thing about “should”: if copyable entities have a definite goal (e.g., making as many additional copies as possible, taking over the world...), then we simply need to ask what form of reasoning will best help them achieve the goal. If, on the other hand, the question is, “how should a copy reason, so as to accord with its own subjective experience? e.g., all else equal, will it be twice as likely to ‘find itself’ in a possible world with twice as many copies?”—then we need some account of the subjective experience of copyable entities before we can even start to answer the question. (A toy calculation making the “twice as many copies” question concrete appears after point (3) below.)
(2) Yes, certainly it’s possible that we’re all living in a digital simulation—in which case, maybe we’re uncopyable from within the simulation, but copyable by someone outside the simulation with “sysadmin access.” But in that case, what can I do, except try to reason based on the best theories we can formulate from within the simulation? It’s no different than with any “ordinary” scientific question.
(3) Yes, I raised the possibility that copyable minds might have no subjective experience or a different kind of subjective experience, but I certainly don’t think we can determine the truth of that possibility by introspection—or for that matter, even by “extrospection”! :-) The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it’s even logically coherent to imagine a distinction between them and copyable minds.
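Since point (1) above turns on the “twice as many copies” question, here is a toy calculation (in Python, with invented world names and numbers) contrasting the two candidate answers: a rule that ignores copy counts, and a rule that weights each world by how many copies it contains. Nothing in the snippet decides between them; deciding is exactly what seems to require an account of the copies’ subjective experience.

```python
# Toy illustration of the question in point (1): two worlds, equally likely a
# priori, where world_B contains twice as many copies of the reasoner as world_A.
# Rule 1 ("unweighted") says a copy's credence should just track the prior over
# worlds; rule 2 ("copy_weighted") says it should be proportional to
# prior x number-of-copies. The two rules disagree, and nothing here tells you
# which (if either) is right.

prior = {"world_A": 0.5, "world_B": 0.5}   # hypothetical prior over worlds
copies = {"world_A": 1, "world_B": 2}      # world_B has twice as many copies

def unweighted(prior, copies):
    # Copy counts play no role: credence = prior.
    return dict(prior)

def copy_weighted(prior, copies):
    # Credence proportional to prior times the number of copies in each world.
    weights = {w: prior[w] * copies[w] for w in prior}
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

print(unweighted(prior, copies))     # {'world_A': 0.5, 'world_B': 0.5}
print(copy_weighted(prior, copies))  # {'world_A': 0.333..., 'world_B': 0.666...}
```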
The most we could do, maybe, is investigate whether the physical substrate of our minds makes them uncopyable, and therefore whether it’s even logically coherent to imagine a distinction between them and copyable minds.
If that’s the most you’re expecting to show at the end of your research program, then I don’t understand why you see it as a “hope” of avoiding the philosophical difficulties you mentioned. (I mean, I have no problem with it as a scientific investigation in general; it’s just that it doesn’t seem to solve the problems that originally motivated you.) For example, according to Nick Bostrom’s Simulation Argument, most human-like minds in our universe are digital simulations run by posthumans. How do you hope to conclude that the simulations “shouldn’t even be included in my reference class” if you don’t hope to conclude that you, personally, are not copyable?