Thanks for posting this.

Do I recall correctly that Gary Drescher also uses the ‘what information processing feels like from the inside’ view of consciousness, and that Eliezer thought it was at least a good insight?
I’ve been warming to the idea as a useful insight, but I’m still pretty confused; it feels like there’s a useful (though possibly quantitative rather than qualitative) difference between myself (obviously ‘conscious’ for any coherent extrapolated meaning of the term) and my computer (obviously not conscious, at least to any significant extent), which is not accounted for by saying merely that consciousness is the feeling of information processing.
I think the idea of consciousness might really be a confused suite of notions arising from real, more fundamental ingredients, including the feeling of information being processed. So maybe it’s more like ‘the properties of an information processor give rise (possibly in combination with other things) to the things we refer to by “consciousness”’. I’m struggling to think of cases where we can’t (at least in principle) taboo consciousness and instead talk about more specific things that we know refer non-confusedly to actual things. And saying ‘consciousness is X’ seems to take consciousness too seriously as a useful or meaningful or coherent concept.
(I guess consciousness is often treated as a fundamental ethical consideration that cannot be reduced any further, but I am skeptical of the idea that consciousness is fundamental to ethics per se, and extremely suspicious of ethical considerations that have not been shown reducible to ‘selfishness’ plus game/decision theory.)
I think there’s a notable probability for the disjunction: either consciousness is meaningless enough as a concept that any attempt to reduce it as far as Tegmark tries is misguided, or consciousness is possible in non-quantum models and Tegmark’s approach (even if it is incomplete or partially incorrect) generalises.
The integrated information theory (IIT) of consciousness claims that, at the fundamental level, consciousness is integrated information, and that its quality is given by the informational relationships generated by a complex of elements (Tononi, 2004).
This theory, which is in the background of the Tegmark paper, allows for different qualities of consciousness. The informational relationships in your computer are vastly simpler than those in your brain, so the quality of its consciousness would be correspondingly poor.
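Tononi’s actual measure, Φ, involves minimizing effective information over bipartitions of a system, which is well beyond a comment-sized sketch. But a much cruder cousin, the multi-information (total correlation), shows the core intuition that integration is information the whole carries beyond its parts. This is a toy illustration under that assumption, not IIT’s formal apparatus; the function names are my own:

```python
# Toy illustration of "integration": the multi-information of a two-element
# binary system. This is a far cruder quantity than Tononi's phi, but it
# captures the intuition that integration = information the whole carries
# beyond the sum of its parts.
from collections import Counter
from math import log2

def entropy(states):
    """Shannon entropy (in bits) of an empirical distribution over states."""
    counts = Counter(states)
    total = sum(counts.values())
    return -sum((c / total) * log2(c / total) for c in counts.values())

def multi_information(joint_states):
    """H(A) + H(B) - H(A, B): zero exactly when the two elements are independent."""
    a_states = [a for a, _ in joint_states]
    b_states = [b for _, b in joint_states]
    return entropy(a_states) + entropy(b_states) - entropy(joint_states)

# Two independent fair bits: each part is informative, but nothing is integrated.
independent = [(0, 0), (0, 1), (1, 0), (1, 1)]
# Two perfectly coupled bits (e.g., one element copies the other): fully integrated.
coupled = [(0, 0), (1, 1)]

print(multi_information(independent))  # 0.0 bits
print(multi_information(coupled))      # 1.0 bit
```

On this crude proxy, a system whose elements constrain one another scores higher than one whose elements vary independently, which is the direction of the computer-versus-brain comparison above.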
I was digging in the references because I thought ‘consciousness’ meant ‘self-awareness’, and I was confused about the direction of the discussion. Now I know that consciousness here is about experience (e.g., the experience of seeing a color or hearing a sound) and the quality of that experience.
Roughly guessing from the Tononi paper, which is beyond my ken, the ‘informational relationships generated by a complex of elements’ can be so complex that new mathematics or physics is required to characterize their complexity and topology.
There is a thought experiment about a photodiode and a digital camera that is helpful in explaining the difference in complexity of information integration; skip down to the section Information: the photodiode thought experiment.
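For a rough numerical feel for that thought experiment (a back-of-the-envelope sketch, not Tononi’s formal treatment; the element counts and function name are illustrative assumptions):

```python
# Back-of-the-envelope numbers for the photodiode vs. camera example
# (illustrative sketch; the element counts are assumptions, not from the paper).
from math import log2

def capacity_bits(n_elements, states_per_element=2):
    """Total information capacity: how many bits the system can distinguish."""
    return n_elements * log2(states_per_element)

print(capacity_bits(1))          # photodiode: 1.0 bit
print(capacity_bits(1_000_000))  # camera sensor: 1,000,000.0 bits of capacity

# The catch: capacity is not integration. Each photodiode is causally
# independent of the rest, so partitioning the camera into its million
# photodiodes loses no information; its integrated information is roughly
# zero, whereas a brain's elements constrain one another and cannot be
# partitioned without loss.
```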
I’ve been warming to the idea as a useful insight, but I’m still pretty confused; it feels like there’s a useful (though possibly quantitative rather than qualitative) difference between myself (obviously ‘conscious’ for any coherent extrapolated meaning of the term) and my computer (obviously not conscious, at least to any significant extent), which is not accounted for by saying merely that consciousness is the feeling of information processing.
Why do you think your computer is not conscious? It probably has more of a conscious experience than, say, a flatworm or sea urchin. (As byrnema notes, conscious does not necessarily imply self-aware here.)