Is there a short explanation of why I ought to reject an analogous theory that algorithms are non-physical properties that supervene on the physical properties of systems that implement those algorithms?
Or, actually, backing up… ought I reject such a theory, from Chalmers et al.'s perspective? Or is “1+1=2” a nonphysical property of certain systems (say, two individual apples placed alongside each other) in the same sense that “red” is?
Yes: algorithms are entirely predictable from, and understandable in terms of, their physical realisations.
Now I’m confused: what you just said is a description of a ‘supervenient’ relation. Are you saying that anytime X is said to supervene on Y, we should reject the theory which features X’s?
No. Supervenience is an ontologically neutral relationship. In Chalmers’s theory, qualia supervene on brain states, so novel brain states will lead to novel qualia. In identity theory, qualia supervene on brain states, so ditto. So the Novel Qualia test does not distinguish the one from the other. The argument for qualia being non-physical properties, as opposed to algorithms, comes down to their reducibility, or lack thereof, not supervenience.
This is not really true, at least without adding some pretty restrictive conditions. By using “joke interpretations”, as pointed out by Searle and Putnam, one could assert that a huge number of “algorithms” supervene on any large-enough physical object.
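The “joke interpretation” point can be made concrete with a toy sketch (the names and states here are illustrative, not from the original comments): any system that passes through enough distinct states can be labelled, after the fact, as “implementing” any finite algorithm, simply by choosing a sufficiently arbitrary mapping.

```python
# Toy version of the Putnam/Searle triviality argument: the physical
# states of an arbitrary object are mapped, by fiat, onto the steps
# of a program. All names here are hypothetical illustrations.

# Physical states of some arbitrary object over three time steps.
rock_states = ["s0", "s1", "s2"]

# Computational states of a trivial "1+1" program.
program_states = ["load 1", "add 1", "output 2"]

# The "interpretation" is just an arbitrary bijection between them.
interpretation = dict(zip(rock_states, program_states))

# Under this mapping, the rock's state history "is" a run of the program.
run = [interpretation[s] for s in rock_states]
print(run)  # ['load 1', 'add 1', 'output 2']
```

Nothing about the rock constrains which program it “runs”; the mapping does all the work, which is why the claim that algorithms straightforwardly supervene on physics needs restrictive conditions.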
Are they?
I mean, sure, the fact that a circuit implementing the algorithm “1+1=2” returns “2” given the instruction to execute “1+1” is entirely predictable, much as the fact that a mouse conditioned to avoid red will avoid a red room is predictable. Absolutely agreed.
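The predictable circuit in question can be sketched minimally (this is one possible realisation, assumed for illustration): a half-adder built from boolean gates, whose output for 1+1 follows entirely from its wiring.

```python
# Minimal sketch of a circuit whose "1+1=2" behaviour is fully
# determined by its gate layout (an assumed illustration).

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Return (sum_bit, carry_bit) for one-bit inputs a and b."""
    sum_bit = a ^ b   # XOR gate
    carry = a & b     # AND gate
    return sum_bit, carry

s, c = half_adder(1, 1)
print(c * 2 + s)  # 2 -- "1+1=2", fixed by the physical wiring alone
```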
But as I understand the idea of qualia, the claim is that the mouse’s predictable behavior with respect to a red room (and the neural activity that gives rise to it) is not a complete description of what’s going on… there is also the mouse’s experience of red, which is an entirely separate, nonphysical, fact about the event, which cannot be explained by current physics even in principle. (Or maybe it turns out mice don’t have an experience of red, but humans certainly do, or at least I certainly do.) Right?
Which, OK. But I also have the experience of seeing two things, just like I have the experience of seeing a red thing. On what basis do I justify the claim that that experience is completely described by a description of the physical system that calculates “2”? How do I know that my experience of 2 isn’t an entirely separate nonphysical fact about the event which cannot be explained by current physics even in principle?