How do you know that the algorithm doesn’t implement feelings? What’s the difference between a brain and a simulation of a brain that causes one to have feelings and the other not to?
(My take is that if we knew enough about brains, we’d have the answer to philosophical questions like this, in the way that knowing enough cell biology and organic chemistry resolved the questions about the mysterious differences between stuff that’s alive and stuff that isn’t.)
If we write conventional programs to run on conventional hardware, there’s no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it didn’t actually experience at all. The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience really exists in the brain and has a role in shaping the data generated by the brain, then there’s no reason why an artificial brain shouldn’t also have sentience in it performing the exact same role. If you simulated it on a computer though, you could reduce the whole thing to a conventional program which can be run by a Chinese Room processor, and in such a case we would be replacing any sentience with simulated sentience (with all the actual sentience removed). The ability to do that doesn’t negate the possibility of the sentience being real in the real brain though. But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
Anyway, all I’m trying to do here is help people home in on the nature of the problem in the hope that this may speed up its resolution. The problem is in that translation from raw experience to data documenting it which must be put together by a data system—data is never generated by anything that isn’t a data system (which implements the rules about what represents what), and data systems have never been shown to be able to handle sentience as any part of their functionality, so we’re still waiting for someone to make a leap of the imagination there to hint at some way that might bridge that gap. It may go on for decades more without anyone making such a breakthrough, so I think it’s more likely that we’ll get answers by trying to trace back the data that the brain produces which makes claims about experiencing feelings to find out where and how that data was generated and whether it’s based in truth or fiction. As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour. And it’s that apparent strength which leads to so many people assuming sentience can appear with a functional role within systems which cannot support that (as well as in those that maybe, just maybe, can).
The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity.
We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)
But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour.
I recommend reading Gödel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
You are also treating “sentience is non-physical” and “sentience is non-existent” as the only options.
I recommend reading Gödel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
It says nothing about qualia/sentience; it only deals with self, emergence and self-reference.
You are also treating “sentience is non-physical” and “sentience is non-existent” as the only options.
With his definition and model of sentience, yes, those are the only options (since he thinks that no merely physical process can contain sentience, as far as I can tell). I don’t think that’s actually how sentience works, which is why I said Sentience (I was trying to imply that I was referring to his definition and not mine).
It says nothing about qualia/sentience; it only deals with self, emergence and self-reference.
It doesn’t say anything about qualia per se because qualia are inherently non-physical, and it’s a physicalist model. It does discuss knowledge and thinking for a few chapters, especially focusing on a model that bridges the neuron-concept gap.
If qualia are non-physical, and they exist, Hofstadter’s physicalist model must be a failure. So why bring it up? I really don’t see what you are driving at.
I confused “sentience” with “sapience”. I suppose that having “sense” as a component should have tipped me off… That renders most of my responses inane.
I should also maybe not assume that David Cooper’s use of a term is the only way it’s ever been used. He’s using both sentience and qualia to mean something ontologically basic, but that’s not the only use. I’d phrase it as “subjective experience,” but they mean essentially the same thing, and they definitely exist. They’re just not fundamental.
For sentience to be real and to have a role in our brains generating data to document its existence, it has to be physical (meaning part of physics) - it would have to interact in some way with the data system that produces that data, and that will show up as some kind of physical interaction, even if one side of it is hidden and appears to be something that we have written off as random noise.
So you think that sentience can alter the laws of physics, and make an atom go left instead of right? That is an extraordinary claim. And cognition is rather resilient to low-level noise, as it has to be—or else thermal noise would dominate our actions and experience.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different “levels of reality” are somewhat “blocked off” from each other in practice, in that you don’t need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.)
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be. (The scientifically-phrased quantum mind hypothesis by Penrose wasn’t immediately rejected for this reason, so I suspect there’s something wrong with this reasoning. It was, however, falsified.)
Anyway, even if this were true, how would you know that?
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would have to simulate that something extra in order for that simulated sentience to appear in it. If that extra something doesn’t exist, there is no sentience.
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum, and when you have neural nets working on mechanisms that are too hard to untangle, there are opportunities for some kind of mechanism to be involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, but we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
If sentience is real, all the models are wrong because none of them show sentience working in any causal way which enables them to drive the generation of data to document the existence of sentience. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest role for sentience. If we have a sensor, a processor and a response, we can call the sensor a “pain” sensor and run a program that makes a motor function to remove the device away from the thing that might be damaging it, and we could call this a pain response, but there’s no pain there—there’s just the assertion of someone looking at it that pain is involved because that person wants the system to be like him/herself—“I feel pain in that situation, therefore that device must feel pain.” But no—there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory which says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, we can have the program make the device display “Ouch! That hurt!” on a screen. The person looking on can now say, “There you go! That’s the proof that it felt pain!” Again though, there’s no pain involved—we can edit the data so that it puts “Oh yes! Give me more of that!” whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
With a more intelligent program, it can understand the idea of damage and damage avoidance, so it can make sure that the data that’s mapped to different inputs makes more sense, but the true data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former one doesn’t. If we ask the device if it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor, we know that there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it’s felt pain, we know that it’s just making it up about feeling pain—we can’t actually make it suffer by torturing it, but will just cause it to go on repeating its fake claim.
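The device described above can be sketched in a few lines of code (a toy illustration; the names and strings are invented, not any real API):

```python
# Sketch of the "pain sensor" device from the thought experiment above.
# The mapping from sensor signal to output string is the whole mechanism:
# the output documents the mapping, not any experience.

RESPONSES = {"damage": "Ouch! That hurt!"}

def respond(signal: str) -> str:
    """Return the canned string mapped to a sensor signal, if any."""
    return RESPONSES.get(signal, "")

print(respond("damage"))  # prints: Ouch! That hurt!

# Invert the mapping and the device "reports" the opposite experience,
# with nothing else about the system changing at all.
RESPONSES["damage"] = "Oh yes! Give me more of that!"
print(respond("damage"))  # prints: Oh yes! Give me more of that!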
“We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)”
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system. It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
“That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.”
But if the sentience doesn’t exist, there is no suffering and no role for morality. Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
“I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.”
I’ll hunt for that some time, but it can’t be any good or it would be better known if such a model existed.
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system.
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It’s the result of the regularity, and other things as well.) The pattern is definitely real—it’s a pattern in a real thing—and it can “affect” lower levels in that the particular arrangement of particles corresponding to the pattern of “physicist and particle accelerator” describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles.
It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
Patterns aren’t nothing. They’re the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON.
But if the sentience doesn’t exist, there is no suffering and no role for morality.
Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)
Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.
It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.
I’ll hunt for that some time, but it can’t be any good or it would be better known if such a model existed.
Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?
Do you imagine that patterns can suffer; that they can be tortured?
“Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)”
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
“Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.”
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
“It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.”
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines them into a model where they have no causal role on any of the action is not building a model that explains anything.
“Better known to you?”
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
“Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?”
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
Do you imagine that patterns can suffer; that they can be tortured?
Yes, I do. I don’t imagine that every pattern can.
Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.
Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines them into a model where they have no causal role on any of the action is not building a model that explains anything.
Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?
I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event! MIT built an entire course around it once! I found all this by looking for less than a minute on Wikipedia. Are you seriously so certain of yourself that if you haven’t heard of a book before, it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more.
I pointed out before that GEB isn’t specifically relevant to sentience. It’s less detailed than the entire body of neuropsychological information, but that still doesn’t contain an explanation of sentience, as Cooper correctly points out.
I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of [things] that aren’t non-physical but aren’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Nobody knows what it is, finally, but physicists are able to use the phrase “dark matter” to communicate with each other—if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarski’s sentences don’t convey any information beyond a theory of truth.
Also, we’re modeling programs that model themselves, and we don’t fall into infinite recursion while doing so, so clearly it’s not necessarily true that any self-modeling program will result in infinite recursion.
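To make the recursion point concrete (a toy sketch with invented names): a program’s self-model can be a finite summary truncated at a chosen depth, so modelling oneself need not produce an infinite regress.

```python
# Toy self-modelling agent (illustrative only). The self-model is a
# bounded summary of the agent's state, not a full copy of the agent,
# so no infinite recursion occurs.

class Agent:
    def __init__(self):
        self.state = {"temperature": "warm", "mood": "calm"}

    def self_model(self, depth: int = 1) -> dict:
        """Summarise the agent's own state, nesting at most `depth` levels."""
        model = dict(self.state)
        if depth > 0:
            model["model_of_self"] = self.self_model(depth - 1)
        return model

agent = Agent()
model = agent.self_model(depth=2)
print(model["mood"])                              # prints: calm
print("model_of_self" in model["model_of_self"])  # prints: True
```

The truncation at `depth == 0` is the design choice that blocks the regress: the deepest level is a plain summary with no further model inside it.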
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything.” Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm” which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on their own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least too much of a wrong form of reductionism.)
“Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.”
There is no situation where the whole is more than the parts—if anything new is emerging, it is a new part coming from somewhere not previously declared.
“You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.”
No—it wouldn’t hurt, and all other feelings would be imaginary too. The fact that they feel too real for that to be the case, though, is an indication that they are real.
“Is it wrong to press keys on the computer which keeps displaying the word ‘Ouch!’?” --> “That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.”
So if it’s just producing fake assertions, it isn’t wrong. And if we are just producing fake assertions, there is nothing wrong about “torturing” people either.
“Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)”
If we have followed the trail to see how the data is generated, we are not kicking a box with unknown content—if the trail shows us that the data is nothing but fake assertions, we are kicking a non-conscious box.
“Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?”
In which case we should be able to follow the trail and see the causation in action, thereby either uncovering the mechanism of sentience or showing that there isn’t any.
“I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.”
If you’re wrong in thinking you feel pain, there is no pain.
“Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
What are you on about—it’s precisely because this is the most important question of them all that it should be up in golden lights.
“(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event!”
All manner of crap wins prizes of that kind.
″...it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)”
If it had a model showing the role of sentience in the system, the big question would have been answered and we wouldn’t have a continual stream of books and articles asking the question and searching desperately for answers that haven’t been found by anyone.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”.”
I mean exactly what I said—everyone’s asking for answers, and none of them have found answers where you claim they lie waiting to be discovered.
″ GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.”
It doesn’t answer the question. There are plenty of experts on the brain and its functionality, but none of them know how consciousness or sentience works.
Thank you very much for answering.
How do you know that the algorithm doesn’t implement feelings? What’s the difference between a brain and a simulation of a brain that causes one to have feelings and the other not to?
(My take is that if we knew enough about brains, we’d have the answer to philosophical questions like this, in the way that knowing enough cell biology and organic chemistry resolved the questions about the mysterious differences between stuff that’s alive and stuff that isn’t,)
Thanks for the questions.
If we write conventional programs to run on conventional hardware, there’s no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it never actually experienced. The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience really exists in the brain and has a role in shaping the data generated by the brain, then there’s no reason why an artificial brain shouldn’t also have sentience in it performing the exact same role. If you simulated it on a computer though, you could reduce the whole thing to a conventional program which can be run by a Chinese Room processor, and in such a case we would be replacing any sentience with simulated sentience (with all the actual sentience removed). The ability to do that doesn’t negate the possibility of the sentience being real in the real brain, though. But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
Anyway, all I’m trying to do here is help people home in on the nature of the problem in the hope that this may speed up its resolution. The problem is in that translation from raw experience to data documenting it which must be put together by a data system—data is never generated by anything that isn’t a data system (which implements the rules about what represents what), and data systems have never been shown to be able to handle sentience as any part of their functionality, so we’re still waiting for someone to make a leap of the imagination there to hint at some way that might bridge that gap. It may go on for decades more without anyone making such a breakthrough, so I think it’s more likely that we’ll get answers by trying to trace back the data that the brain produces which makes claims about experiencing feelings to find out where and how that data was generated and whether it’s based in truth or fiction. As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour. And it’s that apparent strength which leads to so many people assuming sentience can appear with a functional role within systems which cannot support that (as well as in those that maybe, just maybe, can).
We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
You are also treating “sentience is non-physical” and “sentience is non-existent” as the only options.
It says nothing about qualia/sentience, it only deals with self, emergence and self-reference.
With his definition and model of sentience, yes, those are the only options (since he thinks that no merely physical process can contain sentience, as far as I can tell). I don’t think that’s actually how sentience works, which is why I said Sentience (I was trying to imply that I was referring to his definition and not mine).
It doesn’t say anything about qualia per se because qualia are inherently non-physical, and it’s a physicalist model. It does discuss knowledge and thinking for a few chapters, especially focusing on a model that bridges the neuron-concept gap.
If qualia are non-physical, and they exist, Hofstadter’s physicalist model must be a failure. So why bring it up? I really don’t see what you are driving at.
I confused “sentience” with “sapience”. I suppose that having “sense” as a component should have tipped me off… That renders most of my responses inane.
I should also maybe not assume that David Cooper’s use of a term is the only way it’s ever been used. He’s using both sentience and qualia to mean something ontologically basic, but that’s not the only use. I’d phrase it as “subjective experience,” but they mean essentially the same thing, and they definitely exist. They’re just not fundamental.
For sentience to be real and to have a role in our brains generating data to document its existence, it has to be physical (meaning part of physics)—it would have to interact in some way with the data system that produces that data, and that will show up as some kind of physical interaction, even if one side of it is hidden and appears to be something that we have written off as random noise.
So you think that sentience can alter the laws of physics, and make an atom go left instead of right? That is an extraordinary claim. And cognition is rather resilient to low-level noise, as it has to be—or else thermal noise would dominate our actions and experience.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different “levels of reality” are somewhat “blocked off” from each other in practice, in that you don’t need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.)
This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be. (The scientifically-phrased quantum mind hypothesis by Penrose wasn’t immediately rejected for this reason, so I suspect there’s something wrong with this reasoning. It was, however, falsified.)
Anyway, even if this were true, how would you know that?
If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would have to simulate that something extra in order for that simulated sentience to appear in it. If that extra something doesn’t exist, there is no sentience.
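The reduction claimed here can be illustrated with a toy sketch (the weights and threshold below are made up for illustration): a simulated neuron is nothing but arithmetic that a person with pencil and paper could carry out step by step, which is the sense in which the whole simulation could be run by a Chinese Room processor.

```python
# Toy illustration: a simulated neuron reduces to plain arithmetic that
# a human could carry out by hand, with no understanding required.

def step(x):
    # Simple threshold activation.
    return 1.0 if x >= 0.5 else 0.0

def neuron(inputs, weights):
    # Weighted sum followed by the threshold: every step is ordinary
    # arithmetic, so any processor (or person) can execute it.
    total = sum(i * w for i, w in zip(inputs, weights))
    return step(total)

# A hypothetical "damage signal" neuron with made-up weights.
print(neuron([1.0, 0.0], [0.6, 0.3]))   # fires: 1.0
```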
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum, and when you have neural nets working on mechanisms that are too hard to untangle, there are opportunities for some kind of mechanism to be involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, but we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
If sentience is real, all the models are wrong because none of them show sentience working in any causal way which enables them to drive the generation of data to document the existence of sentience. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest role for sentience. If we have a sensor, a processor and a response, we can call the sensor a “pain” sensor and run a program that drives a motor to move the device away from the thing that might be damaging it, and we could call this a pain response, but there’s no pain there—there’s just the assertion of someone looking at it that pain is involved because that person wants the system to be like him/herself—“I feel pain in that situation, therefore that device must feel pain.” But no—there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory which says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, we can have the program make the device display “Ouch! That hurt!” on a screen. The person looking on can now say, “There you go! That’s the proof that it felt pain!” Again though, there’s no pain involved—we can edit the data so that it puts “Oh Yes! Give me more of that!” whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
With a more intelligent program, it can understand the idea of damage and damage avoidance, so it can make sure that the data that’s mapped to different inputs makes more sense, but the true data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former one doesn’t. If we ask the device if it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor, we know that there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it’s felt pain, we know that it’s just making up its claims about feeling pain—we can’t actually make it suffer by torturing it; we will just cause it to go on repeating its fake claim.
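The device described over the last two paragraphs can be sketched in a few lines (the sensor name and response strings are hypothetical stand-ins): the printed message is just stored data mapped to an input, and swapping the string changes nothing about what the device experiences.

```python
# Minimal sketch of the "Ouch!" device described above. The message is
# arbitrary stored data mapped to a sensor input; nothing in this
# program experiences anything, whichever string we choose to store.

RESPONSES = {
    # Swap this for "Oh Yes! Give me more of that!" and the device's
    # behaviour is unchanged in kind: it still just maps input to text.
    "damage_sensor": "Ouch! That hurt!",
}

def handle_signal(source):
    # The honest output would be "I received data from a sensor that
    # indicates likely damage"; the stored string asserts more than that.
    return RESPONSES.get(source, "")

print(handle_signal("damage_sensor"))
```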
“We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)”
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system. It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
“That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.”
But if the sentience doesn’t exist, there is no suffering and no role for morality. Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
“I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.”
I’ll hunt for that some time, but it can’t be any good: if it contained such a model, that model would be better known.
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It’s the result of the regularity, and other things as well.) The pattern is definitely real—it’s a pattern in a real thing—and it can “affect” lower levels in that the particular arrangement of particles corresponding to the pattern of “physicist and particle accelerator” describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles.
Patterns aren’t nothing. They’re the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON.
Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)
Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.
It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.
Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?
“Patterns aren’t nothing.”
Do you imagine that patterns can suffer; that they can be tortured?
“Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)”
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
“Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.”
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
“It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.”
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines feelings into a model where they have no causal role in any of the action is building a model that explains nothing.
“Better known to you?”
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
“Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?”
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
Yes, I do. I don’t imagine that every pattern can.
Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.
You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.
That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.
Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)
Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?
I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.
Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event! MIT built an entire course around it once! I found all this by looking for less than a minute on Wikipedia. Are you seriously so certain of yourself that if you haven’t heard of a book before, it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.
I pointed out before that GEB isn’t specifically relevant to sentience. It’s less detailed than the entire body of neuropsychological information, but that still doesn’t contain an explanation of sentience, as Cooper correctly points out.
I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
So is Cooper just wrong, or using “sentience” differently?
I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of [things] that aren’t non-physical but aren’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Do you understand what dark matter is?
Nobody knows what it ultimately is, but physicists are able to use the phrase “dark matter” to communicate with each other—if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarski’s sentences don’t convey any information beyond a theory of truth.
Also, we’re modeling programs that model themselves, and we don’t fall into infinite recursion while doing so, so clearly it’s not necessarily true that any self-modeling program will result in infinite recursion.
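That point can be sketched concretely (the `Agent` class and its fields are invented for illustration): a system can include a model of itself as a summary rather than a full copy, so self-modeling halts at a fixed depth instead of recursing forever.

```python
# Sketch: a system that models itself by storing a summary of its own
# state rather than a full copy of itself, so self-modeling terminates
# instead of building an infinitely recursive database.

class Agent:
    def __init__(self):
        self.state = {"temperature": "normal"}
        self.self_model = {}          # a summary, not another Agent

    def update_self_model(self):
        # The model records facts about the agent; it contains no nested
        # model-of-the-model, so there is no infinite regress.
        self.self_model = dict(self.state)

a = Agent()
a.state["temperature"] = "warm"
a.update_self_model()
print(a.self_model["temperature"])    # the agent "notices" its own state
```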
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything”. Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm” which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on their own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least too much of a wrong form of reductionism.)
“Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.”
There is no situation where the whole is more than the parts—if anything new is emerging, it is a new part coming from somewhere not previously declared.
“You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.”
No—it wouldn’t hurt, and all other feelings would be imaginary too. But the fact that they feel too real for that to be the case is an indication that they are real.
“Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?” → “That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.”
So if it’s just producing fake assertions, it isn’t wrong. And if we are just producing fake assertions, there is nothing wrong about “torturing” people either.
“Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)”
If we have followed the trail to see how the data is generated, we are not kicking a box with unknown content—if the trail shows us that the data is nothing but fake assertions, we are kicking a non-conscious box.
“Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?”
In which case we should be able to follow the trail and see the causation in action, thereby either uncovering the mechanism of sentience or showing that there isn’t any.
“I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.”
If you’re wrong in thinking you feel pain, there is no pain.
“Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.”
What are you on about—it’s precisely because this is the most important question of them all that it should be up in golden lights.
“(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event!”
All manner of crap wins prizes of that kind.
“...it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)”
If it had a model showing the role of sentience in the system, the big question would have been answered and we wouldn’t have a continual stream of books and articles asking the question and searching desperately for answers that haven’t been found by anyone.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”.”
I mean exactly what I said—everyone’s asking for answers, and none of them have found answers where you claim they lie waiting to be discovered.
“GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.”
It doesn’t answer the question. There are plenty of experts on the brain and its functionality, but none of them know how consciousness or sentience works.