To explain an observation, you must describe it, and the language used for describing conscious experience is the same language used for describing the object the experience refers to.
Generally the way to handle such problems is to discuss them at a meta-level and then formalize the similarity between the meta-level and the problem itself. Gödel first described, in natural language, a formal system for writing down logical sentences about arithmetic, a scheme for encoding those sentences as integers, and an arithmetic algorithm for manipulating the sentences and their proofs; he then formalized his description of the system as an integer that could be interpreted by the formal system itself, demonstrating that logical systems can speak about themselves.
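To make the encoding step concrete, here is a toy sketch of Gödel numbering in Python: a sequence of symbol codes is packed into a single integer as prime-power exponents, and recovered by factoring. (This is only illustrative; Gödel's actual construction, and his arithmetization of proofs, is far richer.)

```python
# Toy Gödel numbering: encode a sequence of symbol codes as one
# integer via prime-power exponents, then decode it back.

def nth_primes(n):
    """Return the first n primes, by trial division."""
    ps, k = [], 2
    while len(ps) < n:
        if all(k % p for p in ps):
            ps.append(k)
        k += 1
    return ps

def encode(symbols):
    """Map a list of symbol codes [s1, s2, ...] to 2**s1 * 3**s2 * ..."""
    num = 1
    for p, s in zip(nth_primes(len(symbols)), symbols):
        num *= p ** s
    return num

def decode(num, length):
    """Recover the symbol codes by counting each prime's exponent."""
    out = []
    for p in nth_primes(length):
        e = 0
        while num % p == 0:
            num //= p
            e += 1
        out.append(e)
    return out

# The sentence '0 = 0', using Gödel's codes 6 for '0' and 5 for '=':
g = encode([6, 5, 6])        # 2**6 * 3**5 * 5**6
assert decode(g, 3) == [6, 5, 6]
```

Because the encoding is reversible, statements *about* the encoded sentences can themselves be written as arithmetic facts about these integers, which is the self-reference trick being leaned on here.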
Similarly, I can say “I experience consciousness” and then tell you how your brain uses neurons to turn that sentence into a bunch of neural impulses that you understand. Then I can take the sentence “I can say ‘I experience consciousness’ and then tell you how your brain uses neurons to turn that sentence into a bunch of neural impulses that you understand” (EDIT: with ‘tell you how your brain uses neurons’ actually replaced with the physical description of what happens) and encode it as the very set of neural impulses your brain actually produced when it heard the first sentence; at that point the correspondence between reality and the meta-level is complete. The description of how your brain turns neural impulses into an understanding of a statement about consciousness is now encoded in the very language of those neural impulses, assuming that neural impulses can actually be quined in this way. Our brains may not have connections complex enough to consciously hold a model of the portion of our brains that we use to understand models, but it is possible in theory. If it’s possible, you can then think about exactly how your brain turns a sentence about consciousness into meaning and it will exactly mirror the actual process your brain used to turn the sentence into the meaning you experience.
I can’t think of a better way to understand consciousness.
you can then think about exactly how your brain turns a sentence about consciousness into meaning and it will exactly mirror the actual process your brain used to turn the sentence into the meaning you experience.
We don’t experience meanings. An organism without qualia could—without contradiction—grasp the meaning of a sentence.
Ah, I did misunderstand you when I read the post. My point is that descriptions of neurons and neural interactions are the correct language for talking about conscious experience and qualia. Consider the following sentence instead: “When I see the color red, this is the neural result it has on my brain. When you see the color red, this is the neural result it has on your brain.” Depending on how similar our brains are, we may be able to come to a consensus on which neural processes implement the quale “red” and decide whether my “red” is also your “red”, while also coming to understand how “red” is implemented in each of our own brains. I think we will probably need to understand our own conscious experience before we can compare specific qualia.