Well, unlike a fundamental theory of physics, we don’t have strong reasons to expect that consciousness is indescribable in any more basic terms. I think there’s a confusion of levels here… GR is a description of how a 4-dimensional spacetime behaves, and it reproduces our observations of the universe with great precision. It doesn’t describe how that spacetime came into existence, because that answers a different question than the one Einstein was asking.
In the case of consciousness, there are many things we don’t know, such as:
1: Can we rigorously draw a boundary around this concept of “consciousness” in concept-space, in a way that captures all the features we think it should have and still makes logical sense as a compact description?
2: Can we use a compact description like that to distinguish empirically between systems that are and are not “conscious”?
3: Can we use a theory of consciousness to design a mechanism that will have a conscious subjective experience?
It’s quite possible that answering 1 will make 2 obvious, and if the answer to 2 is “yes”, then 3 likely becomes a matter of engineering. It seems likely that a theory of consciousness will be built on top of the better-understood knowledge base of computer science, so it should be describable in basic terms if it’s not a completely incoherent concept. And if it is a completely incoherent concept, then we should instead expect an answer from cognitive science telling us why humans generally feel so strongly that consciousness is a coherent concept, even though it actually is not.