Wrt necessitating an “algorithms” view for q5… maybe. My initial answer there was to observe confusion, either in myself or the question, precisely in the area you point out here: it’s unclear how the labels “input” and “output” map to anything we’re talking about. I don’t reject your proposed mapping, but I don’t find it especially compelling either. I’m not sure that those labels necessarily do mean anything, actually.
Wrt not implying substrate independence: sure, I agree in principle; it’s not impossible that only protoplasmic substrates can implement consciousness. All I’m saying is that if that turns out to be true, it will be because certain kinds of computations can only be performed on protoplasmic machines.
Similarly, to say that heavier-than-air flight is a property of certain mechanical operations doesn’t imply substrate-independence for flight; it might be true that those mechanical operations can only be performed by protoplasmic machines.
That said, that would be a surprising result in both cases. Once we built/discovered a heavier-than-air nonprotoplasmic flying machine, the idea that doing so was impossible was rightly discarded; I expect something similar to happen with nonprotoplasmic consciousnesses.
As for strongly implying the absence of substrate-independence: sure, in the strict sense you mean it here, that’s true. Change the substrate and there will always be some difference, even if it turns out to be a difference you-the-observer could not conceivably care less about.
I suppose I could say my understanding of substrate-independence is implicitly a 2-place predicate: system S is substrate-independent with respect to observer O iff O considers some system S2 identical to S, where S is implemented on a different substrate than S2.
A 1-place version, I agree, is unlikely on my view (its negation is, as you say, strongly suggested). I would also say that time-independence (that is, the idea that my consciousness is precisely the same from one moment to the next) is equally unlikely. I would also say that neither of these things matters very much.
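The 2-place predicate above can be made concrete with a small sketch. All the names here (`System`, `Observer`, `substrate_independent`) are illustrative inventions for this thread, not from any real library, and "behavior" stands in for whatever criterion the observer actually uses to judge identity:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class System:
    substrate: str
    behavior: str  # stand-in for whatever the observer actually compares

class Observer:
    """An observer who deems two systems identical iff their behavior matches."""
    def considers_identical(self, s1: System, s2: System) -> bool:
        return s1.behavior == s2.behavior

def substrate_independent(s: System, observer: Observer, candidates) -> bool:
    """2-place predicate: S is substrate-independent with respect to O
    iff O considers some system S2, implemented on a different substrate,
    identical to S."""
    return any(
        s2.substrate != s.substrate and observer.considers_identical(s, s2)
        for s2 in candidates
    )
```

On this rendering the observer-relativity is explicit: a brain and a silicon system with matching behavior make the predicate true for a behavior-comparing observer, while an observer with a stricter `considers_identical` could make it false for the very same pair.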
Physicalists can reject substrate independence and accept the Church-Turing thesis, while still taking consciousness seriously. One can argue that consciousness in the physical world is implemented on protoplasm, and that this is the only kind of consciousness which is directly experienced. The fact that conscious beings can be simulated on a computer would be true but irrelevant.
> Wrt not implying substrate independence: sure, I agree in principle; it’s not impossible that only protoplasmic substrates can implement consciousness. All I’m saying is that if that turns out to be true, it will be because certain kinds of computations can only be performed on protoplasmic machines.
That is false, since we can build Universal Turing Machines (up to a certain finite memory) out of non-protoplasm, and a UTM can compute anything that is computable at all.
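The substrate-neutrality of computation can be illustrated with a minimal Turing machine simulator. This is a sketch, and `run_tm` and the `flip` machine are made-up names for this example; the point is only that the same machine description computes the same function no matter what the simulator itself happens to be made of:

```python
def run_tm(rules, tape, state="start", blank="_", max_steps=10_000):
    """Run a Turing machine. `rules` maps (state, symbol) to
    (new_state, new_symbol, move), with move in {-1, +1}."""
    tape = dict(enumerate(tape))  # sparse tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        sym = tape.get(pos, blank)
        state, tape[pos], move = rules[(state, sym)]
        pos += move
    # Read the tape back, left to right, dropping surrounding blanks.
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A machine that flips every bit, then halts at the first blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", +1),
}
```

For example, `run_tm(flip, "0110")` returns `"1001"` whether the interpreter is running on silicon, relays, or anything else that can follow the transition table, which is the usual Church-Turing point.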
> I suppose I could say my understanding of substrate-independence is implicitly a 2-place predicate: system S is substrate-independent with respect to observer O iff O considers some system S2 identical to S, where S is implemented on a different substrate than S2.
An observer-relative notion of computation is problematic for a computational theory of consciousness, since an observer-relative notion of consciousness is problematic. Surely the point is that I know I am conscious, not that some observer thinks I am.
I’ll accept option #2 as close enough to my view.
Physicalists can’t reject substrate independence and accept the Computational Theory of Mind, however.
You have a proof of the Church-Turing thesis? You should write it up and become famous in the CS community!
The other guy needs a disproof of the CTT: an effective procedure that can only be carried out in protoplasm.