Considering all the layers of convention and interpretation between the physics of a processor and the process it represents, it seems unlikely to me that the alien would be able to describe the simulacra. The alien is therefore unable to specify the experience being created by the cluster.
I don’t think this follows. Perhaps the same calculation could simulate different real world phenomena, but it doesn’t follow that the subjective experiences are different in each case.
If computation is this arbitrary, we have the flexibility to interpret any physical system, be it a wall, a rock, or a bag of popcorn, as implementing any program. And any program means any experience. All objects are experiencing everything everywhere all at once.
Afaik this might be true. We have no way of finding out whether the rock does or does not have conscious experience. The relevant experiences to us are those that are connected to the ability to communicate or interact with the environment, such as the experiences associated with the global workspace in human brains (which seems to control memory/communication); experiences that may be associated with other neural impulses, or with fluid dynamics in the blood vessels or whatever, don’t affect anything.
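The "interpret any system as any program" move quoted above can be sketched concretely. This is a minimal toy version of the Putnam/Searle-style mapping argument, with hypothetical names; it assumes only that the physical system passes through distinct states over time:

```python
# Sketch of the "trivial mapping" move: any sequence of distinct physical
# states can be put in one-to-one correspondence with the trace of any
# program, which is all this notion of "implementation" requires.

def trivial_interpretation(physical_trace, program_trace):
    """Map each physical state to the program state at the same time step.
    Under this mapping, the physical system 'implements' the program --
    and the same trick works for any program at all."""
    assert len(physical_trace) >= len(program_trace)
    return {phys: prog for phys, prog in zip(physical_trace, program_trace)}

# Stand-ins for the rock's thermal microstates at t = 0, 1, 2, 3:
rock_states = ["s0", "s1", "s2", "s3"]
# A trace of some arbitrary computation, here a simple counter:
counter_trace = [0, 1, 2, 3]

mapping = trivial_interpretation(rock_states, counter_trace)
# Every state transition of the rock now "computes" the counter's next state.
```

The point of the sketch is that nothing constrains the choice of `program_trace`: swap in the trace of a chess engine or a brain simulation and the same mapping exists.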
Could both of them be right? No—from your point of view, at least one of them must be wrong. There is one correct answer, the experience you are having.
This also does not follow. Both experiences could happen in the same brain. You—being experience A—may not be aware of experience B—but that does not mean that experience B does not exist.
(edited to merge in other comments which I then deleted)
I think a lot of misunderstandings on this topic are because of a lack of clarity about what exact position is being debated/argued for. I think two relevant positions are
(1) it is impossible to say anything meaningful whatsoever about what any system is computing. (You could write this semi-formally as: ∀ physical system x, ∀ computations c1, c2: the claims “x performs c1” and “x performs c2” are equally plausible.)
(2) it is impossible to have a single, formal, universally applicable rule that tells you which computation a physical system is running that does not produce nonsense results. (I call the problem of finding such a rule the “interpretation problem”, so (2) is saying that the interpretation problem is impossible)
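For contrast, (2) can be put in the same semi-formal style (my notation; R ranges over candidate interpretation rules):

```latex
% (1): no attribution of a computation is better than any other
\forall x \,\forall c_1, c_2 :\ \text{``$x$ performs $c_1$'' and ``$x$ performs $c_2$'' are equally plausible}

% (2): no single rule picks out the computation without nonsense results
\neg\,\exists R \;\forall x :\ R(x) \text{ is a unique, non-absurd computation attributed to } x
```

Written this way, it is easier to see why (1) entails (2) but not conversely: even if some attributions are more plausible than others, there may still be no single rule that settles every case.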
(1) is a much stronger claim and definitely implies (2), but (2) is already sufficient to rule out the type of realist functionalism that OP is attacking. So OP doesn’t have to argue (1). (I’m not sure if they would argue for (1), but they don’t have to in order to make their argument.) And Scott Aaronson’s essay is (as far as I can see) just arguing against (1) by proposing a criterion according to which a waterfall isn’t playing chess (whereas stockfish is). So, you can agree with him and conclude that (1) is false, but this doesn’t get you very far.
The waterfall thought experiment also doesn’t prove (2). Proving (2) is very hard, because (2) says a problem cannot be solved, and it’s hard to prove that anything can’t be done. But the waterfall thought experiment is an argument for (2), by showing that the interpretation problem looks pretty hard. This is how I was using it in my reply to Steven on the first post of this sequence; I didn’t say “and therefore, I’ve proven that no solution to the interpretation problem exists”. I just pointed out that you start off with infinitely many interpretations, and so far no one has figured out how to narrow them down to just one, at least not in a way that gives the answers the properties everyone is looking for.
The argument presented by Aaronson is that, since it would take as much computation to convert the rock/waterfall computation into a usable computation as it would to just do the usable computation directly, the rock/waterfall isn’t really doing the computation.
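A toy illustration of that argument, with hypothetical names: if the decoder that turns rock-states into answers has to do all the work itself, the rock isn’t really doing the computation.

```python
# The rock just evolves; its dynamics know nothing about (say) sorting.
def rock_next_state(state):
    return state + 1

def decode_as_sorting(rock_state, problem_input):
    # To claim "the rock sorted the list", the decoder must map the rock's
    # state to the sorted list. But the only way to produce that output is
    # to sort -- all the computational work lives here, in the interpreter,
    # and the rock's state contributes nothing.
    return sorted(problem_input)

state = rock_next_state(0)
answer = decode_as_sorting(state, [3, 1, 2])  # the interpreter did the sorting
```

Aaronson’s complexity-theoretic version of this is the claim that any “decoder” that extracts chess moves from a waterfall must itself be about as complex as a chess engine.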
I find this argument unconvincing, as we are talking about a possible internal property here, and not about the external relation with the rest of the world (which we already agree is useless).
You disagree with Aaronson that the location of the complexity is in the interpreter, or you disagree that it matters?
In the first case, I’ll defer to him as the expert. But in the second, the complexity is an internal property of the system! (And it’s a property in a sense stronger than almost anything we talk about in philosophy; it’s not just a property of the world around us, because as Gödel and others showed, complexity is a necessary fact about the nature of mathematics!)
The interpreter, if it existed, would have complexity. The useless, unconnected calculation in the waterfall/rock, which could be interpreted but usually isn’t, also has complexity.
Your/Aaronson’s claim is that only the fully connected, sensibly interacting calculation matters. I agree that this calculation is important—it’s the only type we should probably consider from a moral standpoint, for example. And the complexity of that calculation certainly seems to be located in the interpreter, not in the rock/waterfall.
But in order to claim that only the externally connected calculation has conscious experience, we would need to have it be the case that these connections are essential to the internal conscious experience even in the “normal” case—and that to me is a strange claim! I find it more natural to assume that there are many internal experiences, but only some interact with the world in a sensible way.
Perhaps the same calculation could simulate different real world phenomena, but it doesn’t follow that the subjective experiences are different in each case.
I see what you mean, I think. I suppose if you’re into multiple realizability, perhaps the physical processes in the set the alien settles on all implement the same experience. But this just depends on how broad that set is. If it contains two brains, one thinking about the Roman Empire and one eating a sandwich, we’re stuck.
This also does not follow. Both experiences could happen in the same brain. You—being experience A—may not be aware of experience B—but that does not mean that experience B does not exist.
Yeah, I did consider this as a counterpoint. I don’t have a good answer to it, besides it being unintuitive and violating Occam’s razor in some sense.
But this just depends on how broad that set is. If it contains two brains, one thinking about the Roman Empire and one eating a sandwich, we’re stuck.
I suspect that if you do actually follow Aaronson (as linked by Davidmanheim) to extract a unique efficient calculation that interacts with the external world in a sensible way, that unique efficient externally-interacting calculation will end up corresponding to a consistent set of experiences, even if it could still correspond to simulations of different real-world phenomena.
But I also don’t think that consistent set of experiences necessarily has to be a single experience! It could be multiple experiences unaware of each other, for example.
As with OP, I strongly recommend Aaronson, who explains why waterfalls aren’t doing computation in ways that refute the rock example you discuss: https://www.scottaaronson.com/papers/philos.pdf