The idea that a consciousness can exist within an alternate reading frame on a system that is not conscious in my own frame, as Peer exists within the City, does significant violence to my intuitions about consciousness.
The idea that the alternate frame can be temporally discontiguous with my own… that is, that events A and B can occur in both frames but in a different order… does additional violence to my intuitions about time.
That said, I have no reason to expect my intuitions about consciousness or time to reflect the way the universe actually is. (Of course, that doesn’t mean any particular contradictory theory is right.)
Still, without the possibility of intentional causal interaction with such alternate-frame consciousnesses, I’m indifferent to them: I can’t see any reason why I should care whether they exist. I feel more or less the same way as I do about the possibility of epiphenomenal spirits, or epiphenomenal Everett branches: if they are in principle unable to interact causally with me, if no observation I can ever make will go differently based on their existence or nonexistence, then I simply don’t care whether they exist or not.
I don’t endorse that apathy, though. It mostly comes out of a motivational psychology in which believing that future events are significantly influenced by my actions is important to motivating those actions, and I don’t especially endorse that sort of psychology, despite instantiating one.
I don’t see the connection to Searle’s CR.
The initial few ‘thought experiments’ in Permutation City cheat the same way the Chinese Room does. A program capable of simulating “Durham having just finished counting to 7 and about to say 8” must have, in some way, already simulated Durham counting to 7. Similarly, Searle’s Giant Lookup Table must have come into being somehow.
You could make a similar case that choosing the right permutation of dust to create a universe requires complete knowledge of that universe. In this case, that knowledge is coming from the author.
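To make that concrete, here is a minimal sketch in Python (the function names and the counting example are my own invention, nothing from the novel or from Searle): a giant lookup table can answer instantly, but only because the full computation was already run, or its outputs already known, when the table was built.

    # Hypothetical illustration: a "giant lookup table" version of a
    # counting Durham. The table can only be built by actually running
    # the computation, or by an author who already knows its outputs.

    def simulate_counting(n):
        """Run the full computation: counting from 1 up to n."""
        transcript = []
        for i in range(1, n + 1):
            transcript.append("Durham says %d" % i)
        return transcript

    # Building the table *is* the simulation; all the work happens here.
    LOOKUP_TABLE = {n: simulate_counting(n) for n in range(1, 10)}

    # The "room" (or the dust-reading gadget) then looks effortless,
    # but only because the computation was already paid for above.
    def room_reply(n):
        return LOOKUP_TABLE[n][-1]    # e.g. "Durham says 8"

    print(room_reply(8))    # the answer existed before the question was asked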
Ah, I see. Sure, agreed.
Though I guess a lot depends on whether the computation* of Durham counting to 8 requires the computation of Durham being aware of having counted to 7. If it doesn’t, then the program can produce the following sequence: x7 = Durham->countTo(7); x8 = Durham->countTo(8); Durham->awareOf(x8); Durham->awareOf(x7); with the result that Durham goes “8, 7” without any cheating.
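Here is a runnable sketch of that sequence (count_to and aware_of are my own hypothetical stand-ins for countTo and awareOf): the counting states are still computed in their causal order, since the state after 8 is built from the state after 7, but the awareness events are emitted in the opposite order.

    # Hypothetical stand-ins for the countTo / awareOf steps above.
    # The counting computation stays causal: the state after counting
    # to 8 is built from the state after counting to 7. Only the
    # "awareness" (the reporting step) is reordered.

    def count_to(prev_state, n):
        """Extend the counting computation by one more number."""
        return prev_state + [n]

    def aware_of(state):
        """Durham 'notices' the last number counted."""
        print("Durham is aware of having counted to %d" % state[-1])

    x7 = count_to([1, 2, 3, 4, 5, 6], 7)    # depends on everything before it
    x8 = count_to(x7, 8)                    # depends on x7 -- no cheating

    aware_of(x8)    # "8"
    aware_of(x7)    # "7"  -> Durham goes "8, 7"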
The question of whether Durham experiences "7, 8" or "8, 7" is less clear, though.
* I’m more comfortable talking about computation rather than simulation here, because I’m not at all convinced that there’s any difference between a real counting-to-7 and a simulated counting-to-7. I don’t think the distinction actually matters in this context, though.
With thanks to HonoreDB, yes, the structure must have a source. And, as with the Chinese Room, there is a sleight-of-concept going on: the thing that looks like a human (Searle’s paper manipulator, Egan’s Durham) is not the actual “brains” of the system we’re truly analyzing; the real brains are the symbol-manipulation rules in Searle’s case, and the dust/translator combination in Egan’s.
I agree with you that if there is no stateful process to worry about, but merely the instantiation of a trivially predictable “movie-like image of counting the number 8”, then the dust hypothesis might make sense… but I suspect that very few of the phenomena we care about are like this, nor do I think that such phenomena are going to be interesting to us post-uploading. I can’t fully and succinctly explain the intuition I have here, but the core of the objection is connected to reversible computing, computational irreducibility, and their relation to entropy and hence the expenditure of energy.
From these inspirations, it seems likely to me that “the dust” can only be said to contain structure that I care about if the energy used to identify/extract/observe that structure is less than what would have been required for an optimally efficient computational process to invent that structure from scratch. Thus, there is probably a mechanically rigorous way to distinguish between hearing a sound and imagining that same sound, growing out of the way that hearing requires fewer joules than imagining. If a dust-interpretation system requires too much energy, I would guess either that it is mediating a scientifically astonishing real signal (in a grossly inefficient way)… or that you’re dealing with a sort of Clever Hans effect, where the interpretation system plus its battery, not the dust, is the real source of the “detected patterns”.
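A toy way to pin that criterion down (the function name and the joule figures below are made up purely for illustration): the dust only “contains” a pattern worth caring about if reading the pattern out costs fewer joules than an optimally efficient process would spend inventing it from scratch.

    # Toy formalization of the energy criterion above. The classification
    # and the example joule figures are illustrative assumptions, not
    # measurements of anything.

    def classify_dust_claim(joules_to_extract, joules_to_invent):
        """Compare the cost of reading a pattern out of 'the dust' with
        the cost of an optimally efficient process inventing it."""
        if joules_to_extract < joules_to_invent:
            return "real structure: cheaper to hear it than to imagine it"
        else:
            return ("either a grossly inefficient channel to a real signal, "
                    "or Clever Hans: the interpreter plus its battery, not "
                    "the dust, is the real source of the 'detected' pattern")

    # Hearing a sound vs. imagining that same sound, in this vocabulary:
    print(classify_dust_claim(joules_to_extract=1e-9, joules_to_invent=1e-3))
    print(classify_dust_claim(joules_to_extract=1e+3, joules_to_invent=1e-3))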
Using this vocabulary to speak directly to the issues raised in the article on strong substrate independence: the problem with other quantum narratives (or the bits of platospace mathematicians spend their time “exploring”) is that the laws of physical computation seem to be such that our brains can never hear anything from those “places”; our brains can only imagine them.
Yes, that seems like a reasonable way to state more rigorously the distinction between systems I might care about and systems I categorically don’t care about.
Though, thinking about Permutation City a bit more… we, as readers of the novel, have access to the frame in which Peer’s consciousness manifests. The residents of PC don’t have access to it; Peer is no easier for them to access than the infinite number of other consciousnesses they could in principle “detect” within their architecture.
So we care about Peer, and they don’t, and neither of us cares about the infinite number of Peer’s peers. Makes sense.
But there is a difference: their history includes the programming exploit that created the space in which Peer exists, and the events that led to Peer existing within it. One can imagine a resident of PC finding those design notes and building a gadget based on them to encounter Peer, and this would not require implausible amounts of either energy or luck.
And I guess the existence of those design notes would make me care more about Peer than about his peers, were I a resident of PC… which is exactly what I’d predict from this theory.
OK, then.
That’s not due to Searle—you’re talking about Ned Block’s “Blockhead”.
Hm. The Chinese Room seems to be different in my head than it is on Wikipedia. I guess I assumed that writing a book that covers all possible inputs convincingly would necessarily involve lots of brute force.
Well, the man in the Chinese Room is supposed to be manually ‘stepping through’ an algorithm that can respond intelligently to questions in Chinese. He’s not necessarily just “matching up” inputs with outputs, although Searle wants you to think that he may as well just be doing that.
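To make the contrast concrete, here’s a toy sketch (the rulebook format and the phrases are invented for illustration, not Searle’s): the strawman room is a pure input-to-output table, while in the room Searle actually describes the man is just the loop mechanically applying rules he doesn’t understand.

    # Toy contrast between rote matching and stepping through a program.
    # The rulebook format and the phrases are invented for illustration.

    # Strawman version: a pure input -> output table.
    PHRASEBOOK = {"你好": "你好！"}

    def matching_room(symbols):
        return PHRASEBOOK.get(symbols, "……")

    # Closer to what Searle describes: the man mechanically follows a
    # rulebook (a program) he does not understand. The "program" here is
    # a list of (condition, response) rules; the man is just the loop
    # that applies them.
    RULEBOOK = [
        (lambda s: "名字" in s, "我没有名字。"),
        (lambda s: "你好" in s, "你好！"),
        (lambda s: True,        "请再说一遍。"),
    ]

    def man_in_room(symbols):
        for condition, response in RULEBOOK:
            if condition(symbols):    # the man checks squiggle-patterns...
                return response       # ...and copies out the indicated squiggles

    print(man_in_room("你叫什么名字？"))    # -> 我没有名字。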
Searle seems to have very little appreciation of how complicated his program would have to be, though to be fair, his intuitions were shaped by chatbots like ELIZA.
Anyway, the “Systems Reply” is correct (hurrah—we have a philosophical “result”). Even those philosophers who think this is in some way controversial ought to agree that it’s irrelevant whether the man in the room understands Chinese, because he is analogous to the CPU, not the program.
Therefore, his thought experiment has zero value—if you can imagine a conscious machine then you can imagine the “Systems Reply” being correct, and if you can’t, you can’t.
Searle is an idiot; the nebulous “understanding” he talks about in the original paper is obviously informationally contained in the algorithm. The degree to which someone believes that “understanding” can’t be contained in an algorithm is the degree to which they believe in dualism. Just because executing an algorithm from the inside feels like something we label “understanding” doesn’t make it magic.
How about, instead of an opaquely described “alternate reading frame”, we consider homomorphic encryption? Take some uploads in a closed environment, homomorphically encrypt the whole thing, throw away the decryption key, and then start it running. I think this matches Peer’s situation in all relevant respects: the information about the uploads exists in the ordinary computational basis (not talking Dust Theory here), and there is a short and fast program to extract it, but it’s computationally intractable to find that program if you don’t know the secret. The difference is that this way it’s much more obvious what that secret would look like.
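For concreteness, here’s a toy of the “computation proceeds on state nobody can read” idea. It uses the textbook fact that unpadded RSA is multiplicatively homomorphic; actually running whole uploads this way would need fully homomorphic encryption (lattice-based schemes and the like), and the tiny key below is purely illustrative.

    # Toy demonstration that computation can proceed on encrypted state
    # without the decryption key. Unpadded RSA is multiplicatively
    # homomorphic: Enc(a) * Enc(b) mod n decrypts to a * b mod n. Real
    # "encrypted uploads" would need fully homomorphic encryption; this
    # only gives the flavor.

    p, q = 61, 53      # textbook-sized key, utterly insecure
    n = p * q          # 3233, public modulus
    e = 17             # public exponent
    d = 2753           # private exponent -- imagine this thrown away

    def enc(m):
        return pow(m, e, n)

    def dec(c):
        return pow(c, d, n)

    a, b = 42, 13
    ca, cb = enc(a), enc(b)

    # The "environment" evolves by operating on ciphertexts only; no key
    # is consulted at this step.
    c_product = (ca * cb) % n

    # Without d, c_product is opaque to us; the structure is still in
    # there, but extracting it is exactly as hard as recovering the secret.
    assert dec(c_product) == (a * b) % n
    print("ciphertext result:", c_product)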
Yeah, I basically agree, and that does less violence to my intuitions on the subject… still more evidence, were it needed, that my intuitions on the subject are unreliable.
Indeed, I’m not even sure how relevant the computational intractability of breaking the encryption is. That is, I’m not actually sure how Peer’s situation is relevantly different from my own with respect to someone sitting in another building somewhere… what matters about both of them is simply that we aren’t interacting with one another in any important way.
The degree to which the counterfactual story about how we might interact with one another seems plausible is relevant to my intuitions about those consciousnesses, as you say, but it doesn’t seem at all relevant to anything outside of those intuitions.