As with OP, I strongly recommend Aaronson, who explains why waterfalls aren’t doing computation in ways that refute the rock example you discuss: https://www.scottaaronson.com/papers/philos.pdf
I think a lot of the misunderstanding on this topic comes from a lack of clarity about which exact position is being debated or argued for. I think two relevant positions are:
(1) it is impossible to say anything meaningful whatsoever about what any system is computing. (You could write this semi-formally as: ∀ physical system x, ∀ computations c1, c2: the claims "x performs c1" and "x performs c2" are equally plausible.)
(2) it is impossible to have a single, formal, universally applicable rule that tells you which computation a physical system is running without producing nonsense results. (I call the problem of finding such a rule the "interpretation problem", so (2) is saying that the interpretation problem is unsolvable.)
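(In the same semi-formal style, (2) could be written as: ¬∃ a single rule R such that R is formal and universally applicable and, ∀ physical system x, R(x) = "the computation x performs" without ever producing nonsense results. The phrasing is mine, added only to make the contrast with (1) explicit: (1) says all attributions are equally plausible, while (2) only says that no one rule can pick out the right one.)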
(1) is a much stronger claim and definitely implies (2), but (2) is already sufficient to rule out the type of realist functionalism that OP is attacking. So OP doesn't have to argue (1). (I'm not sure if they would argue for (1), but they don't have to in order to make their argument.) And Scott Aaronson's essay is (as far as I can see) just arguing against (1) by proposing a criterion according to which a waterfall isn't playing chess (whereas Stockfish is). So you can agree with him and conclude that (1) is false, but this doesn't get you very far.
The waterfall thought experiment also doesn't prove (2). Proving (2) is very hard because (2) just says that a problem cannot be solved, and it's hard to prove that anything can't be done. But the waterfall thought experiment is an argument for (2) in that it shows the interpretation problem looks pretty hard. This is how I was using it in my reply to Steven on the first post of this sequence: I didn't say "and therefore, I've proven that no solution to the interpretation problem exists"; I just pointed out that you start off with infinitely many interpretations, and so far no one has figured out how to narrow them down to just one, at least not in a way that gives answers with the properties everyone is looking for.
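To make the "infinitely many interpretations" point concrete, here is a minimal sketch (my own toy construction, not something from the posts under discussion) of how an arbitrary state-to-state mapping lets you read essentially any computation into any sufficiently rich physical process:

```python
# Toy illustration, in the spirit of Putnam/Searle-style triviality arguments:
# given ANY sequence of distinct physical states, we can build a mapping under
# which the physical system "runs" ANY computation with a trace of that length.

def build_interpretation(physical_states, computation_trace):
    """Pair the i-th physical state with the i-th computational state."""
    assert len(physical_states) >= len(computation_trace)
    return dict(zip(physical_states, computation_trace))

# A "waterfall": five arbitrary, distinct micro-states observed in sequence.
waterfall = ["w0", "w1", "w2", "w3", "w4"]

# State traces of two different computations, same length as the observation.
counter_trace = [0, 1, 2, 3, 4]                        # "counting up"
parity_trace = ["even", "odd", "even", "odd", "even"]  # "tracking parity"

# Both interpretations are internally consistent; nothing in the waterfall
# itself privileges one over the other, or over any of the endless further
# traces of the same length.
print(build_interpretation(waterfall, counter_trace))
print(build_interpretation(waterfall, parity_trace))
```

The interpretation problem is to state one formal rule that throws out gerrymandered mappings like these while still saying that your laptop really is running Stockfish.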
Seems worth noting that the claim of most of the philosophers being cited here is (1): that even rocks are doing the same computation as minds.
You can also disambiguate between
a) computation that actually interacts in a comprehensible way with the real world and
b) computation that has the same internal structure at least momentarily but doesn’t interact meaningfully with the real world.
I expect that (a) can usually be uniquely pinned down to a specific computation (probably in both senses (1) and (2)), while (b) can’t.
But I also think it’s possible that the interactions, while important for establishing the disambiguated computation that we interact with, are not actually crucial to internal experience, so that the multiple possible computations of type (b) may also be associated with internal experiences—similar to Boltzmann brains.
(I think I got this idea from “Good and Real” by Gary L. Drescher. See sections “2.3 The Problematic Arbitrariness of Representation” and “7.2.3 Consciousness and Subjunctive Reciprocity”)
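To illustrate the asymmetry claimed above between (a) and (b), here is a minimal sketch using a trivial stand-in computation (my own toy, with made-up names):

```python
# (a) vs. (b) with a trivial stand-in computation (illustrative toy only).

def double(x):       # candidate computation 1
    return 2 * x

def add_three(x):    # candidate computation 2
    return x + 3

# (b): a momentary internal trace with no further interaction.
# The single recorded step 3 -> 6 is consistent with BOTH candidates,
# so the trace alone doesn't pin down which computation "it" is.
trace = (3, 6)
print(double(trace[0]) == trace[1], add_three(trace[0]) == trace[1])  # True True

# (a): a system we can keep interacting with. Probing it with fresh inputs
# disambiguates: its answers match doubling, not adding three.
def probe(system):
    return [system(x) for x in (0, 1, 5, 10)]

print(probe(double))     # [0, 2, 10, 20]
print(probe(add_three))  # [3, 4, 8, 13]
```

That is the sense in which the externally coupled computation (a) can be pinned down, while the momentary look-alike (b) cannot.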
The argument presented by Aaronson is that, since it would take as much computation to convert the rock/waterfall computation into a usable computation as it would to just do the usable computation directly, the rock/waterfall isn't really doing the computation.
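To make that concrete, here is a toy sketch of my own (with a hypothetical best_move standing in for a real chess engine); note that all the chess-playing work happens while building the interpreter, none of it in the waterfall:

```python
# Toy sketch: "interpreting" a waterfall as playing chess.
# best_move is a hypothetical placeholder for a real engine's computation.

def best_move(position):
    # Stand-in for the actual chess computation we want to attribute to the waterfall.
    return {"start": "e4", "after_e4_e5": "Nf3"}.get(position, "resign")

def build_waterfall_interpreter(waterfall_states, positions):
    """Map observed waterfall states to chess moves.

    The only way to fill in this table is to run best_move ourselves while
    building it, so the interpreter, not the waterfall, does the chess work.
    """
    return {w: best_move(p) for w, p in zip(waterfall_states, positions)}

print(build_waterfall_interpreter(["w0", "w1"], ["start", "after_e4_e5"]))
# {'w0': 'e4', 'w1': 'Nf3'}
```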
I find this argument unconvincing, as we are talking about a possible internal property here, and not about the external relation with the rest of the world (which we already agree is useless).
Do you disagree with Aaronson that the complexity is located in the interpreter, or do you disagree that it matters?
In the first case, I’ll defer to him as the expert. But in the second, the complexity is an internal property of the system! (And it’s a property in a sense stronger than almost anything we talk about in philosophy; it’s not just a property of the world around us, because as Gödel and others showed, complexity is a necessary fact about the nature of mathematics!)
The interpreter, if it existed, would have complexity. The useless, unconnected calculation in the waterfall/rock, which could be interpreted but usually isn't, also has complexity.
Your/Aaronson’s claim is that only the fully connected, sensibly interacting calculation matters. I agree that this calculation is important—it’s the only type we should probably consider from a moral standpoint, for example. And the complexity of that calculation certainly seems to be located in the interpreter, not in the rock/waterfall.
But in order to claim that only the externally connected calculation has conscious experience, it would have to be the case that these connections are essential to the internal conscious experience even in the "normal" case, and that seems to me a strange claim! I find it more natural to assume that there are many internal experiences, but that only some of them interact with the world in a sensible way.
Your/Aaronson's claim is that only the fully connected, sensibly interacting calculation matters.

Not at all. I'm not making any claim about what matters or counts here, just pointing out a confusion in the claims made here and by many of the philosophers who have discussed the topic.