making the same argument that I am, merely in different vocabulary
I’m not sure I fully understand your argument. Recall that I don’t understand one of your questions. I think you disagree with some of my answers to your questions, but you hinted that you don’t think my answers are inconsistent. So I’m really not sure what’s going on.
If the computer-with-spark-plugs-attached is conscious...do you still consider this confirmation of substrate independence?
Not every substance can perform every sub-part role in a consciousness-producing computation, so there’s a limit to “independence”. Insofar as it means that an entity comprised entirely of non-biological parts can be conscious, which is the usual point of contention, a conscious system made up of a normal computer plus mechanical parts obviously demonstrates that, so I’m not sure what you mean.
To me, what is important is to establish that there’s nothing magical about bio-goo needed for consciousness; as for exactly which possible computers are conscious, I don’t know.
If you replace the guy moving the rocks around with a crudely-built robot moving the rocks in the same pattern, do you think it’s plausible that anything in that system experiences human-like consciousness?
What subjective probability would you assign to it?
I don’t know what the “usual” point of contention is, but it isn’t the one on which I’m taking a position against Bostrom. Look again at my original post, at how Bostrom defined substrate-independence, and at how I paraphrased it. Both Bostrom’s definition and mine imply that xkcd’s desert and certain Giant Look-Up Tables are conscious.
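For readers unfamiliar with the Giant Look-Up Table thought experiment: a GLUT replaces all computation with a pre-enumerated mapping from every possible input history to an output. A toy sketch in Python (the table entries here are illustrative placeholders; a behaviorally human-equivalent GLUT would need an entry for every possible input sequence, which is combinatorially enormous):

```python
# Toy sketch of a Giant Look-Up Table (GLUT) agent.
# Instead of computing a reply, it looks one up, keyed on the exact
# conversation history so far. These few entries are hypothetical;
# the philosophical point is that a complete table would reproduce
# a person's input-output behavior with no internal processing.

GLUT = {
    (): "Hello.",
    ("Hello.",): "How are you?",
    ("Hello.", "Fine, thanks."): "Glad to hear it.",
}

def glut_agent(history):
    """Return the canned reply for this exact history, or a
    default when the history was never enumerated in the table."""
    return GLUT.get(tuple(history), "...")

print(glut_agent([]))           # -> Hello.
print(glut_agent(["Hello."]))   # -> How are you?
```

The disputed question is whether a system like this, which merely retrieves rather than computes, would be conscious under a definition of substrate-independence that counts it as implementing the right input-output behavior.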
The substrate independence of computation (without regard to consciousness) is well known, and just means that more than one material system can implement a programme, not that any system can. If consciousness is more “fussy” about its substrate than a typical programme, then in a strict sense computationalism is false. (Although AI, which is a broader claim, could still be true.)
Plausible? What does that mean, exactly?
What subjective probability would you assign to it?