Funny that you should mention élan vital. The more I read about it, the more “consciousness” seems to me to be as incoherent and pseudoscientific as vitalism. This isn’t a fringe view, and I’d recommend skimming the “Rejection of the problem” section of the Wikipedia page on the hard problem of consciousness for additional context. It’s hard not to be confused about a term that isn’t coherent to begin with.
Supposing each scenario could be definitively classified as conscious or not, would that help you make any predictions about the world?
The cognitive system embedded within the body that is writing now (‘me’) sometimes registers certain things (‘feelings’) and sometimes doesn’t; I call the first “being conscious” and the second “not being conscious”. Then I notice that not all of the things that my body’s sensory systems register get registered as ‘conscious feelings’ all of the time (even while conscious), and that some people even report not being aware of their own sensory perception of things like vision.
Whatever causes that difference in which things get registered is what I call ‘consciousness’. Now I ask how that works.
Supposing each scenario could be definitively classified as conscious or not, would that help you make any predictions about the world?
Presumably, that it has the type of cognitive structures that allow an entity to consistently feel (and maybe report) feelings about the same sensory inputs in similar contexts.
I don’t know how well our intuition about ‘consciousness’ tracks any natural phenomenon, but the consistent shifting of attention (conscious vs. subconscious) is a fact as empirically well-established as any can be.
So as a rough analogy, if you were a computer program, the conscious part of the execution would be kind of like log output from a thread monitoring certain internal states?
I suppose so? But it’s not an original take of mine; I just made a quick, rough synthesis of: rereading the section that you shared (the problem for illusionism is particularly interesting: how to explain why we get the impression that our experiences are phenomenological), a quick rereading of attention schema theory, remembering EY saying that our confusion about something points to something that needs explaining, along with his points about what a scientifically adequate theory of consciousness should be able to explain (including the binding problem and the ‘causal’ ability to introspect about the system), and basic facts I already knew, plus basic introspection about things that seem as undeniably true as any observation we can make.
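To make that rough analogy a bit more concrete, here is a minimal toy sketch in Python. It is entirely my own illustration with made-up names (`sensory_state`, `attended_channels`, and so on); it isn’t claiming anything about attention schema theory, that lab, or how brains actually work. Every ‘sensory’ channel keeps getting updated all the time, but only the channels currently in an attended set are ever written to the log, and the log output plays the role of the ‘conscious’ part.

```python
# Toy illustration of the "conscious = logged" analogy (all names hypothetical).
# Many sensory channels are updated every tick, but only the channels currently
# under 'attention' are written to the log; the rest are processed silently.

import random
import threading
import time

# Shared internal state: every channel is always being updated ("subconscious" processing).
sensory_state = {"vision": 0.0, "hearing": 0.0, "touch": 0.0, "proprioception": 0.0}
state_lock = threading.Lock()

# Whatever selects this subset plays the role of 'consciousness' in the analogy.
attended_channels = {"vision", "hearing"}

def sensory_loop(stop: threading.Event) -> None:
    """Update all channels continuously, whether or not anything is 'aware' of them."""
    while not stop.is_set():
        with state_lock:
            for channel in sensory_state:
                sensory_state[channel] = random.random()
        time.sleep(0.1)

def monitor_loop(stop: threading.Event) -> None:
    """The monitoring thread: only attended channels ever make it into the log."""
    while not stop.is_set():
        with state_lock:
            snapshot = {c: v for c, v in sensory_state.items() if c in attended_channels}
        print(f"[conscious log] {snapshot}")  # everything else stays "subconscious"
        time.sleep(0.2)

if __name__ == "__main__":
    stop = threading.Event()
    threads = [threading.Thread(target=sensory_loop, args=(stop,)),
               threading.Thread(target=monitor_loop, args=(stop,))]
    for t in threads:
        t.start()
    time.sleep(1.0)  # run briefly, then shut down
    stop.set()
    for t in threads:
        t.join()
```

Swapping which channels are in `attended_channels` at runtime would then be the analogue of shifting attention: the same inputs keep arriving, but different ones become reportable.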
By the way, because of all that reading I discovered that an AI lab is trying to implement in AIs the cognitive structures that attention schema theory predicts cause consciousness, with the aid of neuroscientists from Frankfurt and Princeton, and they even receive European funding. Pretty crazy to think that my taxes fund people who, we could reasonably say, are trying to create conscious AIs.
https://alientt.com/astound/