Sort of. I consider the stuff about the ‘meta-hard problem’, aka providing a mechanical account of an agent that would report having non-mechanically-explicable qualia, to be more fundamental. The postulation of consciousness as basic is then one possible way of relating that account to your own experiences. (Also, I wouldn’t say that consciousness is a ‘building block of reality’ in the same way that quarks are. Asking whether consciousness is physically real is not a question with a true/false answer; it’s a type error within a system that relates world-models to experiences.)
Relating this meta-theory to other minds and morality is somewhat trickier. I’d say that the theory in this post already provides a plausible account of which other cognitive systems will report having non-mechanically-explicable qualia (and thus provides as close an answer to “which systems are conscious?” as we’re going to get). On the brain side, I think this is implemented intuitively by seeing which parts of the external world can be modeled by re-using part of your brain to simulate them, then deploying a built-in suite of social emotions towards such things. This can probably be extrapolated to a more general theory of morality towards entities with a mind architecture similar to ours (thus providing as close an answer as we’re going to get to ‘which physical systems have positive or negative experiences?’).