I think a lot of misunderstandings on this topic stem from a lack of clarity about exactly which position is being debated. I think two relevant positions are
(1) it is impossible to say anything meaningful whatsoever about what any system is computing. (Semi-formally: ∀ physical system x, ∀ computations c1, c2: the claims “x performs c1” and “x performs c2” are equally plausible.)
(2) it is impossible to have a single, formal, universally applicable rule that tells you which computation a physical system is running without producing nonsense results. (I call the problem of finding such a rule the “interpretation problem”, so (2) says that the interpretation problem is unsolvable.)
(1) is a much stronger claim and definitely implies (2), but (2) is already sufficient to rule out the type of realist functionalism that OP is attacking. So OP doesn’t have to argue (1). (I’m not sure whether they would argue for (1), but they don’t have to in order to make their argument.) And Scott Aaronson’s essay is (as far as I can see) just arguing against (1), by proposing a criterion according to which a waterfall isn’t playing chess (whereas Stockfish is). So you can agree with him and conclude that (1) is false, but this doesn’t get you very far.
The waterfall thought experiment also doesn’t prove (2). It’s very hard to prove (2) because (2) just says that a problem cannot be solved, and it’s hard to prove that anything can’t be done. But the waterfall thought experiment is an argument for (2), because it shows that the interpretation problem looks pretty hard. This is how I was using it in my reply to Steven on the first post of this sequence: I didn’t say “and therefore, I’ve proven that no solution to the interpretation problem exists”; I just pointed out that you start off with infinitely many interpretations, and currently no one has figured out how to narrow them down to just one, at least not in a way such that the answers have the properties everyone is looking for.
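To make the “infinitely many interpretations” point concrete, here is a toy sketch (my own illustration, not part of the original thought experiment; all the names and traces are made up): if nothing constrains the map from physical states to computational states, then any trajectory of pairwise-distinct physical states can be read as “performing” any computation of the same length.

```python
# Toy illustration of the interpretation problem: absent further criteria,
# a gerrymandered mapping can pair any sequence of distinct physical states
# with the state sequence of any computation whatsoever.

def make_interpretation(physical_trajectory, computational_trace):
    """Build a mapping sending each physical state to the computational
    state occupying the same position in the trace."""
    assert len(physical_trajectory) == len(computational_trace)
    # The trick works whenever the physical states are pairwise distinct:
    assert len(set(physical_trajectory)) == len(physical_trajectory)
    return dict(zip(physical_trajectory, computational_trace))

# A "waterfall": just a sequence of distinct micro-states.
waterfall = ["w0", "w1", "w2", "w3"]

# Two unrelated computations of the same length (hypothetical traces).
chess_trace = ["e4", "e5", "Nf3", "Nc6"]   # states of a chess game
adder_trace = [0, 1, 3, 6]                 # running total of 0+1+2+3

# Both interpretations exist, and nothing in the mapping itself
# privileges one over the other.
as_chess = make_interpretation(waterfall, chess_trace)
as_adder = make_interpretation(waterfall, adder_trace)
print(as_chess["w2"])  # Nf3
print(as_adder["w2"])  # 3
```

Aaronson’s criterion (roughly, that a genuine implementation must make the computation easier to extract, not harder) is one proposed way to rule out such gerrymandered maps, but as argued above, refuting (1) this way is not the same as solving the interpretation problem in the sense of (2).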
You can also disambiguate between
a) computation that actually interacts in a comprehensible way with the real world, and
b) computation that has the same internal structure at least momentarily but doesn’t interact meaningfully with the real world.
I expect that (a) can usually be uniquely pinned down to a specific computation (probably in both senses (1) and (2)), while (b) can’t.
But I also think it’s possible that the interactions, while important for establishing the disambiguated computation that we interact with, are not actually crucial to internal experience, so that the multiple possible computations of type (b) may also be associated with internal experiences—similar to Boltzmann brains.
(I think I got this idea from “Good and Real” by Gary L. Drescher. See sections “2.3 The Problematic Arbitrariness of Representation” and “7.2.3 Consciousness and Subjunctive Reciprocity”)