I have no objection to this position. However, it does not imply substrate independence, and strongly suggests its negation.
I disagree, and think that in any case substrate independence comes in two types, corresponding to two directions of replacement: replacing basic units with complex units, and replacing complex units with other complex units. Replacing a basic unit with a complex unit that does the same thing the basic unit did preserves the equations that treated the basic unit as basic. I will attempt to explain.
Consciousness is presumably not a unique property of one specific system. If you’ve been conscious over the course of reading this sentence, multiple physical patterns have been conscious. I am quite different from the person I was ten years ago, and quite different from my grandmother and from someone living in an uncontacted tribe, who are also conscious beings. If all humans are conscious, no line between consciousness and non-consciousness will be found within the range of human brain variation.
Whole brains (complex things) can be replaced with giant lookup tables (different complex things) without consciousness being preserved. The output of “Yes” as an answer to a specific question may be identical between the two systems, but the internal computations are different, so it is logically possible that the new computations are not within the wide realm of computations that produce consciousness.
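To make that concrete, here is a minimal sketch (all names hypothetical, not anyone’s actual proposal) of two answerers with identical input-output behavior but entirely different internal computation:

def computed_answer(question: str) -> str:
    """Derives its answer by actually performing a computation."""
    # A stand-in for genuine internal processing.
    return "Yes" if len(question) % 2 == 0 else "No"

# A giant lookup table built by recording the first system's outputs.
# It reproduces every answer without performing the original computation.
questions = ["Are you conscious?", "Is two plus two four?"]
giant_lookup_table = {q: computed_answer(q) for q in questions}

def table_answer(question: str) -> str:
    """Replays a stored answer; nothing like the original computation occurs."""
    return giant_lookup_table[question]

for q in questions:
    assert computed_answer(q) == table_answer(q)  # outputs are identical
# Identical outputs, yet any property that depends on *how* the answer is
# produced (the internal computations) need not be shared between the two.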
Above I was referring to replacing complex biological units with complex mechanical units, where “substrate independence” will depend on the specifics of the replacement. However, any replacement of a basic unit with a more complicated unit that gives the same output for each input will leave the conscious system intact, as the old equations will not be altered.
For example: suppose that a mechanical system of gears and pulleys produces knives (or consciousness) and clanks. It is possible to replace a gear with a sub-system consisting of a set of range finders, a computer, mechanical hands, and speakers. The sub-system can measure what the surrounding gears are doing, use the hands to spin the gears as if the missing gear were in place, and use the speakers to make noises as if the old gear were in place.
Everything produced by the old system will also be produced by the new system, though the new system may also produce something else, such as GTA on the computer. This is because we replaced a basic unit with a more complicated system that produces additional things.
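A minimal code sketch of the same substitution (class and function names are hypothetical illustrations): a basic unit is swapped for a more complicated unit exposing the same input-output behavior, and the enclosing system cannot tell the difference:

class BasicGear:
    """The original basic unit: turns at twice the neighboring speed."""
    def respond(self, neighbor_speed: float) -> float:
        return 2.0 * neighbor_speed

class SensorHandComputerGear:
    """A complex replacement: measures the neighbors and mimics the old gear."""
    def respond(self, neighbor_speed: float) -> float:
        measured = self._measure(neighbor_speed)   # range finders
        target = 2.0 * measured                    # computer reproduces the old rule
        return self._spin_hands_to(target)         # mechanical hands act it out

    def _measure(self, speed: float) -> float:
        return speed

    def _spin_hands_to(self, speed: float) -> float:
        return speed

def machine_output(gear, input_speed: float) -> float:
    """The enclosing system only ever sees the gear's input-output behavior."""
    return gear.respond(input_speed) + 1.0

for v in (0.0, 1.5, 3.0):
    assert machine_output(BasicGear(), v) == machine_output(SensorHandComputerGear(), v)
# The old system's equations treated the gear as basic; the substitution
# preserves them, even though the replacement may also do other things.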
Similarly, replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness. Probably, but not by logical necessity, cells are not needed to produce consciousness as a computed output.
tl;dr: computationalism implies substrate independence insofar as anything upon which computations act may be replaced by anything of any form, the only requirement being that it give the same outputs as the old unit would have. Anything a computation uses by first mapping it may be replaced by anything that would be mapped identically.
Agreed that “replacing biological cells with analogously functional mechanical cells should certainly preserve consciousness,” but this is a very limited sort of substrate “independence”. This approach makes the difficulty of producing an AI with consciousness-as-we-know-it much more severe. Evolution finds local optima, while intelligent design is more flexible, so I expect AI to take off much faster and more successfully, at some point, in a direction other than brain emulation.
Like dfranke, I favor option #2, but like peterdjones, I don’t think it fits under “computationalism”.
This sounds an awful lot like “making the same argument that I am, merely in different vocabulary”. You say po-tay-to, I say po-tah-to, you say “computations”, I say “physical phenomena”. Take the example of the spark-plug brain from my earlier post. If the computer-with-spark-plugs-attached is conscious but the computer alone is not, do you still consider this confirmation of substrate independence? If so, then I think you’re using an even weaker definition of the term than I am. How about xkcd’s desert? If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it’s plausible that anything in that system experiences human-like consciousness? If you say “no”, then I don’t know whether we’re disagreeing on anything.
making the same argument that I am, merely in different vocabulary
I don’t fully understand your argument. Recall that I don’t understand one of your questions. I think you disagree with some of my answers to your questions, but you hinted that you don’t think my answers are inconsistent. So I’m really not sure what’s going on.
If the computer-with-spark-plugs-attached is conscious...do you still consider this confirmation of substrate independence?
Not every substance can perform every sub-part role in a consciousness-producing computation, so there’s a limit to “independence”. Insofar as it means an entity composed entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I’m not sure what you mean.
To me, what is important is to establish that there’s nothing magical about bio-goo needed for consciousness, and as far as exactly which possible computers are conscious, I don’t know.
If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it’s plausible that anything in that system experiences human-like consciousness?
Plausible? What does that mean, exactly? What subjective probability would you assign to it?
Not every substance can perform every sub-part role in a consciousness-producing computation, so there’s a limit to “independence”. Insofar as it means an entity composed entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously shows that, so I’m not sure what you mean.
I don’t know what the “usual” point of contention is, but this isn’t the one I’m taking a position in opposition to Bostrom on. Look again at my original post and how Bostrom defined substrate-independence and how I paraphrased it. Both Bostrom’s definition and mine mean that xkcd’s desert and certain Giant Look-Up Tables are conscious.
The substrate independence of computation (without regard to consciousness) is well known, and just means that more than one material system can implement a programme, not that any system can. If consciousness is more “fussy” about its substrate than a typical programme, then in a strict sense, computationalism is false. (Although AI, which is a broader claim, could still be true.)