I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
So is Cooper just wrong, or using “sentience” differently?
I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of something that is supposedly physical yet isn’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Do you understand what dark matter is?
Nobody knows what it ultimately is, but physicists are able to use the phrase “dark matter” to communicate with each other, if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarskian sentences of that form don’t convey any information beyond a theory of truth.
Also, we ourselves are modeling programs that model themselves, and we don’t fall into infinite recursion while doing so, so it’s clearly not true that every self-modeling program must result in infinite recursion.
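To make that concrete, here is a minimal toy sketch in Python (all names are invented for illustration; this is not anyone’s actual model of the mind). The agent’s self-model is a bounded summary of its own state rather than a full copy of itself, so modeling itself terminates instead of recursing forever:

```python
class Agent:
    """A toy self-modeling program (illustrative only).

    Its self-model is a bounded summary of its own state rather than a
    full copy of itself, so modeling itself does not recurse forever.
    """

    def __init__(self):
        self.state = {"temperature": 37.0, "mood": "calm"}
        self.self_model = {}

    def model_self(self):
        # Observe the internal state and record a finite summary of it.
        # The self-model is *about* the agent but is not another agent to
        # be modeled in turn, so there is no infinite regress.
        self.self_model = {
            "observed_state": dict(self.state),
            "i_have_a_self_model": True,
        }
        return self.self_model


agent = Agent()
agent.model_self()
agent.model_self()  # modeling the self again still terminates immediately
print(agent.self_model)
```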
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything.” Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm”, which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on its own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least reductionism of the wrong kind.)
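As an illustration of the “feeling as observation tied to a self-model” picture, here is a minimal toy sketch (the names and thresholds are invented, not a claim about real minds): a sensor reports “warm” when body temperature is above normal, the observation updates a self-model, and “I feel warm” is just a readout of that self-model.

```python
NORMAL_TEMP = 37.0  # invented threshold, purely for illustration

def sense_temperature(body_temp):
    # The "heat sensor": reports "warm" when body temperature is above normal.
    return "warm" if body_temp > NORMAL_TEMP else "normal"

class SelfModel:
    """Holds the idea-of-your-current-internal-state."""

    def __init__(self):
        self.current_internal_state = {}

    def observe(self, label, reading):
        # Observations about oneself change or reinforce the self-model.
        self.current_internal_state[label] = reading

    def report_feeling(self, label):
        # "I feel X" is a readout of the self-model, i.e. an observation about oneself.
        return f"I feel {self.current_internal_state.get(label, 'nothing in particular')}"

model = SelfModel()
model.observe("temperature", sense_temperature(38.5))
print(model.report_feeling("temperature"))  # -> I feel warm
```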