If I understand you right, you're basically saying that once we postulate consciousness as some basic, irreducible building block of reality, the confusion related to consciousness will evaporate. Maybe that will help partially, but I don't think it will solve the problem completely. Why? Let's say that consciousness is some terminal node in our world-model; this still leaves the question "What systems in the world are conscious?". And I'd guess that the current hypotheses answering this question are rather confusing. We didn't have the same level of confusion with other models of basic building blocks. For example, with atoms we thought "yup, everything is an atom; to build this rock we need these atoms, and for the cat, those", and then with quantum configurations we think "OK, the universe is one gigantic configuration, the rock is this factor and the cat is that one", etc., and that doesn't seem very unintuitive (even if the process of producing these factors is hard, it is known 'in principle'). But with consciousness we don't know (even in principle!) how to measure consciousness in any particular system, and that, IMHO, is the important difference.
Sort of. I consider the stuff about the 'meta-hard problem', aka providing a mechanical account of an agent that would report having non-mechanically-explicable qualia, to be more fundamental. The postulation of consciousness as basic is then one possible way of relating that account to your own experiences. (Also, I wouldn't say that consciousness is a 'building block of reality' in the same way that quarks are. Asking if consciousness is physically real is not a question with a true/false answer; it's a type error within a system that relates world-models to experiences.)
Relating this meta-theory to other minds and morality is somewhat trickier. I'd say that the theory in this post already provides a plausible account of which other cognitive systems will report having non-mechanically-explicable qualia (and thus provides as close an answer to "which systems are conscious?" as we're going to get). On the brain side, I think this is implemented intuitively by seeing which parts of the external world can be modeled by re-using part of your brain to simulate them, then deploying a built-in suite of social emotions towards such things. This can probably be extrapolated to a more general theory of morality towards entities with a mind architecture similar to ours (thus providing as close an answer as we're going to get to 'which physical systems have positive or negative experiences?').