I once took this reductio in the opposite direction and ended up becoming convinced that consciousness is what it feels like inside a logically consistent description of a mind-state, whether or not it is instantiated anywhere. I’m still confused about some of the implications of this, but somewhat less confused about consciousness itself.
Take a moment to convince yourself that there is nothing substantively different between this scenario and the previous one, except that it contains approximately 10,000 times the maximum safe dosage of "in principle".
Once again, Simone will claim she’s conscious.
...Yeah, I’m sorry, but I just don’t believe her.
I don’t claim certain knowledge about the ontology of consciousness, but if I can summon forth a subjective consciousness ex nihilo by making the right series of graphite squiggles (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.
“If I can summon forth a subjective consciousness ex nihilo by making the right blobs of protein throw around the right patterns of electrical impulses and neurotransmitters (which don’t even mean anything outside human minds), then we might as well just give up and admit consciousness is magic.”
Remember that it doesn’t count as a reductio ad absurdum unless the conclusion is logically impossible (or, for the Bayesian analogue, very improbable according to some actual calculation) rather than merely implausible-sounding. I’d rather take Simone’s word for it than believe my intuitions about plausibility.
Doesn’t this imply that an infinity of different subjective consciousnesses are being simulated right now, if only we knew how to assign inputs and outputs correctly?
I started a series of articles, which got some criticism on LW in the past, dealing with this issue (among others) and this kind of ontology. In short, even if an ontology like this applies, it does not mean that all computations are equal: there would be issues of measure associated with the number (I'm simplifying here) of interpretations that can find any particular computation; a toy sketch after the list below illustrates the counting idea. Part 4 of this series, which has been delayed for a long time and which will answer many objections, should be up in a while; the previous articles are as follows:
Minds, Substrate, Measure and Value, Part 1: Substrate Dependence. http://www.paul-almond.com/Substrate1.pdf.
Minds, Substrate, Measure and Value, Part 2: Extra Information About Substrate Dependence. http://www.paul-almond.com/Substrate2.pdf.
Minds, Substrate, Measure and Value, Part 3: The Problem of Arbitrariness of Interpretation. http://www.paul-almond.com/Substrate3.pdf.
This won't resolve everything, but it should show that the kind of ontology you are talking about is not a "random free-for-all".
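To make the measure idea concrete, here is a minimal sketch (my own toy gloss, not Almond's actual formalism; the trace, the state names, and the `interpretations_finding` helper are all invented for illustration): fix one physical state-sequence and count how many state-mappings turn it into a valid run of a given machine. More permissive machines are "found" by more interpretations, so computations need not all carry equal weight.

```python
from itertools import product

# Toy gloss on interpretation-counting (illustrative only): weight a
# machine by how many interpretations of one fixed physical trace
# yield a valid run of it.

physical_trace = ["p0", "p1", "p2"]   # three distinct microstates
machine_states = ["a", "b"]

def interpretations_finding(allowed_transitions):
    """Count state-mappings under which the physical trace is a valid run."""
    distinct = sorted(set(physical_trace))
    count = 0
    for images in product(machine_states, repeat=len(distinct)):
        interp = dict(zip(distinct, images))
        run = [interp[p] for p in physical_trace]
        if all((x, y) in allowed_transitions for x, y in zip(run, run[1:])):
            count += 1
    return count

alternator = {("a", "b"), ("b", "a")}              # must flip every step
anything = {(x, y) for x in "ab" for y in "ab"}    # every step allowed

print(interpretations_finding(alternator))  # 2 interpretations find it
print(interpretations_finding(anything))    # 8 interpretations find it
```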
This relates to the notion of "joke interpretations" under which a rock can be said to be implementing a given algorithm. There's some discussion of it in Gary Drescher's Good and Real.
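For concreteness, here is the joke-interpretation construction in miniature (a sketch of my own with invented names, not taken from Good and Real): as long as the rock passes through distinct physical states, any finite run of any machine can be pinned onto it by brute relabeling.

```python
# A "joke interpretation": relabel a rock's (distinct) physical states so
# that its history matches an arbitrary computation's state trace.

def joke_interpretation(physical_trace, computation_trace):
    """Map each physical state to a computational state so that the
    physical history 'implements' the given run."""
    assert len(set(physical_trace)) == len(physical_trace), "states must be distinct"
    assert len(physical_trace) == len(computation_trace)
    return dict(zip(physical_trace, computation_trace))

rock = ["rock@t0", "rock@t1", "rock@t2", "rock@t3"]  # successive microstates
machine_run = ["s0", "s1", "s0", "s1"]               # some machine's run

mapping = joke_interpretation(rock, machine_run)
# Under this mapping, the rock's history just "is" the machine's run:
assert [mapping[s] for s in rock] == machine_run
```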
Yes, it does. And if the universe is spatially infinite, then that implies an infinity of different subjective consciousnesses, too. Neither of these seems like a problem to me.
Not necessarily. See Chalmers's reply to Hilary Putnam, who asserted something similar, especially section 6. Basically, if we require that all of the "internal" structure of the computation be preserved by the isomorphism, and make a reasonable assumption about the nature of consciousness, all of the matter in the Hubble volume wouldn't be close to large enough to simulate a (human) consciousness.
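A hedged sketch of the kind of constraint Chalmers proposes (my own toy formalization with made-up names, not his wording): require the interpretation to commute with the dynamics for every state and every possible input, not just along the one history that actually occurred. The relabeled rock from the sketch above passes on its actual inputs but fails counterfactually.

```python
# Chalmers-style constraint (toy version): the interpretation must commute
# with the dynamics for *every* state and input, not merely relabel one
# observed history.

transition = {("s0", 0): "s0", ("s0", 1): "s1",   # an XOR accumulator
              ("s1", 0): "s1", ("s1", 1): "s0"}

rock_states = ["r0", "r1", "r2", "r3"]
def rock_step(state, _input):                      # the rock ignores inputs
    return rock_states[(rock_states.index(state) + 1) % len(rock_states)]

# Joke interpretation built from one trace where every input happened to be 1:
interp = {"r0": "s0", "r1": "s1", "r2": "s0", "r3": "s1"}

def implements(step, interp, transition, states, inputs):
    """Does interp(step(s, i)) == transition[(interp[s], i)] everywhere?"""
    return all(interp[step(s, i)] == transition[(interp[s], i)]
               for s in states for i in inputs)

print(implements(rock_step, interp, transition, rock_states, [1]))     # True
print(implements(rock_step, interp, transition, rock_states, [0, 1]))  # False
```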
Do you think the world outside your body is still there when you’re asleep? That objects are still there when you close your eyes?
This.