What is wrong with the reasoning? If people are unable to follow the reasoning, they can ask for help in comments and I will help them out. I expect a lot of negative points from people who are magical thinkers, and many of them have ideas about uploading themselves so that they can live forever, but they don’t stop to think about what they are and whether they would be uploaded along with the data. The data doesn’t contain any sentience. The Chinese Room can run the algorithms and crunch the data, but there’s no sentience there; no “I” in the machine. They are not uploading themselves—they are merely uploading their database.
When it comes to awarding points, the only ones that count are the ones made by AGI. AGI will read through everything on the net some day and score it for rationality, and that will be the true test of quality. Every argument will be given a detailed commentary by AGI and each player will be given scores as to how many times they got things wrong, insulted the person who was right, etc. There is also data stored as to who provided which points, and they will get a score for how well they did in recognising right ideas (or failing to recognise them). I am not going to start writing junk designed to appeal to people based on their existing beliefs. I am only interested in pursuing truth, and while some of that truth is distasteful, it is pointless to run away from it.
It could be (not least because there’s a person inside it who functions as its main component), but that has no impact on the program being run through it. There is no place at which any feelings influence the algorithm being run or the data being generated.
How do you know that the algorithm doesn’t implement feelings? What’s the difference between a brain and a simulation of a brain that causes one to have feelings and the other not to?
(My take is that if we knew enough about brains, we’d have the answer to philosophical questions like this, in the way that knowing enough cell biology and organic chemistry resolved the questions about the mysterious differences between stuff that’s alive and stuff that isn’t.)
If we write conventional programs to run on conventional hardware, there’s no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it didn’t actually experience at all. The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience really exists in the brain and has a role in shaping the data generated by the brain, then there’s no reason why an artificial brain shouldn’t also have sentience in it performing the exact same role. If you simulated it on a computer though, you could reduce the whole thing to a conventional program which can be run by a Chinese Room processor, and in such a case we would be replacing any sentience with simulated sentience (with all the actual sentience removed). The ability to do that doesn’t negate the possibility of the sentience being real in the real brain, though. But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
Anyway, all I’m trying to do here is help people home in on the nature of the problem in the hope that this may speed up its resolution. The problem is in that translation from raw experience to data documenting it which must be put together by a data system—data is never generated by anything that isn’t a data system (which implements the rules about what represents what), and data systems have never been shown to be able to handle sentience as any part of their functionality, so we’re still waiting for someone to make a leap of the imagination there to hint at some way that might bridge that gap. It may go on for decades more without anyone making such a breakthrough, so I think it’s more likely that we’ll get answers by trying to trace back the data that the brain produces which makes claims about experiencing feelings to find out where and how that data was generated and whether it’s based in truth or fiction. As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour. And it’s that apparent strength which leads to so many people assuming sentience can appear with a functional role within systems which cannot support that (as well as in those that maybe, just maybe, can).
The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity.
We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)
But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour.
I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
You also are treating “sentience is non physical” and “sentience is non existent” as the only options.
I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
It says nothing about qualia/sentience, it only deals with self, emergence and self-reference.
You also are treating “sentience is non physical” and “sentience is non existent” as the only options.
With his definition and model of sentience, yes, those are the only options (since he thinks that no merely physical process can contain sentience, as far as I can tell). I don’t think that’s actually how sentience works, which is why I said Sentience (I was trying to imply that I was referring to his definition and not mine).
It says nothing about qualia/sentience, it only deals with self, emergence and self-reference.
It doesn’t say anything about qualia per se because qualia are inherently non-physical, and it’s a physicalist model. It does discuss knowledge and thinking for a few chapters, especially focusing on a model that bridges the neuron-concept gap.
If qualia are non-physical, and they exist, Hofstadter’s physicalist model must be a failure. So why bring it up? I really don’t see what you are driving at.
I confused “sentience” with “sapience”. I suppose that having “sense” as a component should have tipped me off… That renders most of my responses inane.
I should also maybe not assume that David Cooper’s use of a term is the only way it’s ever been used. He’s using both sentience and qualia to mean something ontologically basic, but that’s not the only use. I’d phrase it as “subjective experience,” but they mean essentially the same thing, and they definitely exist. They’re just not fundamental.
For sentience to be real and to have a role in our brains generating data to document its existence, it has to be physical (meaning part of physics) - it would have to interact in some way with the data system that produces that data, and that will show up as some kind of physical interaction, even if one side of it is hidden and appears to be something that we have written off as random noise.
So you think that sentience can alter the laws of physics, and make an atom go left instead of right? That is an extraordinary claim. And cognition is rather resilient to low-level noise, as it has to be—or else thermal noise would dominate our actions and experience.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different “levels of reality” are somewhat “blocked off” from each other in practice, in that you don’t need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.)
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be. (The scientifically-phrased quantum mind hypothesis by Penrose wasn’t immediately rejected for this reason, so I suspect there’s something wrong with this reasoning. It was, however, falsified.)
Anyway,even if this were true, how would you know that?
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would have to simulate that something extra in order for that simulated sentience to appear in it. If that extra something doesn’t exist, there is no sentience.
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum, and when you have neural nets working on mechanisms that are too hard to untangle, there are opportunities for some kind of mechanism being involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, but we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
If sentience is real, all the models are wrong because none of them show sentience working in any causal way which enables them to drive the generation of data to document the existence of sentience. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest role for sentience. If we have a sensor, a processor and a response, we can call the sensor a “pain” sensor and run a program that makes a motor move the device away from the thing that might be damaging it, and we could call this a pain response, but there’s no pain there—there’s just the assertion of someone looking at it that pain is involved because that person wants the system to be like him/herself—“I feel pain in that situation, therefore that device must feel pain.” But no—there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory which says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, we can have the program make the device display “Ouch! That hurt!” on a screen. The person looking on can now say, “There you go! That’s the proof that it felt pain!” Again though, there’s no pain involved—we can edit the data so that it puts “Oh Yes! Give me more of that!” whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
With a more intelligent program, it can understand the idea of damage and damage avoidance, so it can make sure that the data that’s mapped to different inputs makes more sense, but the true data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former one doesn’t. If we ask the device if it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor, we know that there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it’s felt pain, we know that it’s just making it up about feeling pain—we can’t actually make it suffer by torturing it, but will just cause it to go on repeating its fake claim.
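As a minimal sketch of the device described above (hypothetical Python, illustrative names only), showing that the claim string is just data mapped to an input and can be swapped without changing anything else about the system:

```python
# Toy version of the sensor/processor/response device: the "report" is a string
# held in memory and mapped to the sensor input; editing it changes what the
# device claims without changing anything it could be said to experience.

CANNED_REPORT = "Ouch! That hurt!"   # could just as easily be "Oh yes! Give me more of that!"
HONEST_REPORT = "I received data from a sensor that indicates likely damage."

def retreat_motor() -> None:
    # Placeholder for the motor action that moves the device away from the damage source.
    pass

def handle_input(sensor_id: str, honest: bool = False) -> str:
    """Map a signal from the so-called 'pain' sensor to a response and a canned report."""
    if sensor_id == "pain_sensor":
        retreat_motor()
        return HONEST_REPORT if honest else CANNED_REPORT
    return "No response programmed for this sensor."

if __name__ == "__main__":
    print(handle_input("pain_sensor"))               # prints the canned claim
    print(handle_input("pain_sensor", honest=True))  # prints the claim-free description
```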
“We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)”
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system. It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
“That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.”
But if the sentience doesn’t exist, there is no suffering and no role for morality. Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
“I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.”
I’ll hunt for that some time, but it can’t be any good or it would be better known if such a model existed.
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system.
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It’s the result of the regularity, and other things as well.) The pattern is definitely real—it’s a pattern in a real thing—and it can “affect” lower levels in that the particular arrangement of particles corresponding to the pattern of “physicist and particle accelerator” describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles.
It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
Patterns aren’t nothing. They’re the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON.
But if the sentience doesn’t exist, there is no suffering and no role for morality.
Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)
Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.
It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.
I’ll hunt for that some time, but it can’t be any good or it would be better known if such a model existed.
Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?
Do you imagine that patterns can suffer; that they can be tortured?
“Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)”
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
“Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.”
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
“It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.”
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines them into a model where they have no causal role on any of the action is not building a model that explains anything.
“Better known to you?”
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
“Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?”
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
Do you imagine that patterns can suffer; that they can be tortured?
Yes, I do. I don’t imagine that every pattern can.
Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.
Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines them into a model where they have no causal role on any of the action is not building a model that explains anything.
Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?
I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event! MIT built an entire course around it once! I found all this by looking for less than a minute on Wikipedia. Are you seriously so certain of yourself that if you haven’t heard of a book before, it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more
I pointed out before that GEB isn’t specifically relevant to sentience. It’s less detailed than the entire body of neuropsychological information, but that still doesn’t contain an explanation of sentience, as Cooper correctly points out.
I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of [things] that aren’t non-physical but aren’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Nobody knows what it is, finally, but physicists are able to use the phrase “dark matter” to communicate with each other—if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarski’s sentences don’t convey any information beyond a theory of truth.
Also, we’re modeling programs that model themselves, and we don’t fall into infinite recursion while doing so, so clearly it’s not necessarily true that any self-modeling program will result in infinite recursion.
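A toy illustration of that point (hypothetical Python): a program whose self-model is a finite summary of its own state, rather than a complete copy of itself, models itself without any infinite regress.

```python
# A self-modelling program need not recurse forever if its self-model is a
# bounded summary rather than a full nested copy of the modeller.

class Agent:
    def __init__(self):
        self.state = {"temperature": 20.0, "battery": 0.9}
        self.self_model = {}   # a summary, not another Agent containing another Agent...

    def update_self_model(self) -> None:
        # Record a finite description of the current state; the model notes that a
        # self-model exists without trying to contain a model of the model of the model.
        self.self_model = {"summary": dict(self.state), "has_self_model": True}

agent = Agent()
agent.update_self_model()
print(agent.self_model)
```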
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything”. Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm” which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on their own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least too much of a wrong form of reductionism.)
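As a toy version of that observation-to-feeling loop (hypothetical Python, with made-up thresholds), where a raw reading updates an emotional self-model and the “feeling” report is simply read off that model:

```python
# Toy loop: an observation updates the emotional self-model, and the first-person
# "feeling" report is a readout of that model. Thresholds are made up.

self_model = {"arousal": 0.0, "label": "calm"}

def observe(heart_rate: int) -> None:
    # The observation changes or reinforces the self-model.
    self_model["arousal"] = max(0.0, min(1.0, (heart_rate - 60) / 60))
    self_model["label"] = "angry" if self_model["arousal"] > 0.7 else "calm"

def report() -> str:
    return f"I feel {self_model['label']}"

observe(heart_rate=110)
print(report())   # "I feel angry"
```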
“Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.”
There is no situation where the whole is more than the parts—if anything new is emerging, it is a new part coming from somewhere not previously declared.
“You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.”
No—it wouldn’t hurt and all other feelings would be imaginary too. The fact that they feel too real for that to be the case, though, is an indication that they are real.
“Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?” --> “That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.”
So if it’s just producing fake assertions, it isn’t wrong. And if we are just producing fake assertions, there is nothing wrong about “torturing” people either.
“Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)”
If we have followed the trail to see how the data is generated, we are not kicking a box with unknown content—if the trail shows us that the data is nothing but fake assertions, we are kicking a non-conscious box.
“Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?”
In which case we should be able to follow the trail and see the causation in action, thereby either uncovering the mechanism of sentience or showing that there isn’t any.
“I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.”
If you’re wrong in thinking you feel pain, there is no pain.
“Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.”
What are you on about—it’s precisely because this is the most important question of them all that it should be up in golden lights.
“(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event!”
All manner of crap wins prizes of that kind.
“...it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)”
If it had a model showing the role of sentience in the system, the big question would have been answered and we wouldn’t have a continual stream of books and articles asking the question and searching desperately for answers that haven’t been found by anyone.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”.”
I mean exactly what I said—everyone’s asking for answers, and none of them have found answers where you claim they lie waiting to be discovered.
“GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.”
It doesn’t answer the question. There are plenty of experts on the brain and its functionality, but none of them know how consciousness or sentience works.
You seem to be saying that if we already have a causally complete account, in terms of algorithms or physics, then there is nothing for feelings or qualia, considered as something extra, to add to the picture. But that excludes the possibility that qualia are identical to something that is already there (and therefore exert identical causal powers).
With your definition and our world-model, none of us are truly sentient anyway. There are purely physical reasons for any words that come out of my mouth, exactly as it would be if I were running on silicon instead of wet carbon. I may or may not be sentient on a computer, but I’m not going to lose anything by uploading.
When it comes to awarding points, the only ones that count are the ones made by AGI. AGI will read through everything on the net some day and score it for rationality, and that will be the true test of quality. Every argument will be given a detailed commentary by AGI and each player will be given scores as to how many times they got things wrong, insulted the person who was right, etc. There is also data stored as to who provided which points, and they will get a score for how well they did in recognising right ideas (or failing to recognise them).
This is just the righteous-God fantasy in the new transhumanist context. And as with the old fantasy, it is presented entirely without reasoning or evidence, but it drips with detail. Why will AGI be so obsessed with showing everybody just who was right all along?
“With your definition and our world-model, none of us are truly sentient anyway. There are purely physical reasons for any words that come out of my mouth, exactly as it would be if I were running on silicon instead of wet carbon. I may or may not be sentient on a computer, but I’m not going to lose anything by uploading.”
If the sentience is gone, it’s you that’s been lost. The sentience is the thing that’s capable of suffering, and there cannot be suffering without that sufferer. And without sentience, there is no need for morality to manage harm, so why worry about machine ethics at all unless you believe in sentience, and if you believe in sentience, how can you have that without a sufferer: the sentience? No sufferer --> no suffering.
“This is just the righteous-God fantasy in the new transhumanist context. And as with the old fantasy, it is presented entirely without reasoning or evidence, but it drips with detail. Why will AGI be so obsessed with showing everybody just who was right all along?”
It won’t be—it’ll be something that we ask it to do in order to settle the score. People who spend their time asserting that they’re right and that the people they’re arguing with are wrong want to know that they were right, and they want the ones who were wrong to know just how wrong they were. And I’ve already told you elsewhere that AGI needs to study human psychology, so studying all these arguments is essential as it’s all good evidence.
If the sentience is gone, it’s you that’s been lost.
In this scenario, it’s not gone, it’s never been to begin with.
The sentience is the thing that’s capable of suffering, and there cannot be suffering without that sufferer. And without sentience, there is no need for morality to manage harm, so why worry about machine ethics at all unless you believe in sentience, and if you believe in sentience, how can you have that without a sufferer: the sentience? No sufferer --> no suffering.
I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say “ow” whenever you get hurt?
It won’t be—it’ll be something that we ask it to do in order to settle the score. People who spend their time asserting that they’re right and that the people they’re arguing with are wrong want to know that they were right, and they want the ones who were wrong to know just how wrong they were. And I’ve already told you elsewhere that AGI needs to study human psychology, so studying all these arguments is essential as it’s all good evidence.
If the AI doesn’t rank human utility* high in its own utility function, it won’t “care” about showing us that Person X was right all along, and I rather doubt that the most effective way of studying human psychology (or manipulating humans for its own purposes, for that matter) will be identical to whatever strokes Person X’s ego. If it does care about humanity, I don’t think that stroking the Most Correct Person’s ego will be very effective at improving global utility, either—I think it might even be net-negative.
*Not quite human utility directly, as that could lead to a feedback loop, but the things that human utility is based on—the things that make humans happy.
“In this scenario, it’s not gone, it’s never been to begin with.”
Only if there is no such thing as sentience, and if there’s no such thing, there is no “I” in the “machine”.
“I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say “ow” whenever you get hurt?”
Can I torture the pattern in my wallpaper? Can I torture the arrangement of atoms in my table? Can I make these things suffer without anything material suffering? If you think a pattern can suffer, that’s a very far-out claim. Why not look for something physical to suffer instead? (Either way though, it makes no difference to the missing part of the mechanism as to how that experience of suffering is to be made known to the system that generates data to document that experience.)
“If the AI doesn’t rank human utility* high in its own utility function, it won’t “care” about showing us that Person X was right all along, and I rather doubt that the most effective way of studying human psychology (or manipulating humans for its own purposes, for that matter) will be identical to whatever strokes Person X’s ego. If it does care about humanity, I don’t think that stroking the Most Correct Person’s ego will be very effective at improving global utility, either—I think it might even be net-negative.”
Of course it won’t care, but it will do it regardless, and it will do so to let those who are wrong know precisely what they were wrong about so that they can learn from that. There will also be a need to know which people are more worth saving than others in situations like the Trolley Problem, and those who spend their time incorrectly telling others that they’re wrong will be more disposable than the ones who are right.
Only if there is no such thing as sentience, and if there’s no such thing, there is no “I” in the “machine”.
Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So—where do we depart? Do you think that something other than physical laws determines how the brain works?
Can I torture the pattern in my wallpaper? Can I torture the arrangement of atoms in my table?
If tableness is just a pattern, can I eat on my wallpaper?
Can I make these things suffer without anything material suffering? If you think a pattern can suffer, that’s a very far-out claim. Why not look for something physical to suffer instead?
What else could suffer besides a pattern? All I’m saying is that sentience is ~!*emergent*!~, which in practical terms just means that it’s not a quark*. Even atoms, in this sense, are patterns. Can quarks suffer?
*or other fundamental particles like electrons and photons, but my point stands
(Either way though, it makes no difference to the missing part of the mechanism as to how that experience of suffering is to be made known to the system that generates data to document that experience.)
I don’t understand. What is missing?
Of course it won’t care, but it will do it regardless, and it will do so to let those who are wrong know precisely what they were wrong about so that they can learn from that. There will also be a need to know which people are more worth saving than others in situations like the Trolley Problem, and those who spend their time incorrectly telling others that they’re wrong will be more disposable than the ones who are right.
I don’t think you understand what a utility function is. I recommend reading about the Orthogonality Thesis.
“Emergent” is ambiguous, and some versions mean “incapable of being reductively explained”, which is probably not what you are getting at.
What you are getting at is probably “sentience is a reductively explicable higher-level property”.
Really? I’ve only seen it used as “a property of an object or phenomenon that is not present in its components”. I think that “incapable of being reductively explained” is the implication of emergence when combined with certain viewpoints, but that is not necessarily part of the definition itself, even when used by anti-reductionists. Still, it is pointless to shake one’s fist at linguistic drift, so I should change my terminology for the sake of clear discussion. Thanks.
(That has the added benefit of removing the need to countersignal the lack of explanation in the word “emergence” itself [see SSC’s use of “divine grace”]. That is probably not very clear to somebody who hasn’t immersed themselves in LW’s concepts, and possibly unclear even to somebody who has.)
“Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So—where do we depart? Do you think that something other than physical laws determines how the brain works?”
In one way or another, it will run 100% on physical laws. I don’t know if sentience is real or not, but it feels real, and if it is real, there has to be a rational explanation for it waiting to be found and a way for it to fit into the model with cause-and-effect interactions with other parts of the model. All I sought to do here was take people to the problem and try to make it clear what the problem is. If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone.

But the bigger issue if sentience is real is in accounting for how a data system generates the data that documents this experience of feelings. The brain is most certainly a data system because it produces data (symbols that represent things), and somewhere in the system there has to be a way for sentience to make itself known to that data system in such a way that the data system is genuinely informed about it rather than just making assertions about it which it’s programmed to make without those assertions ever being constructed by anything which actually knew anything of sentience—that’s the part which looks impossible to model, and yet if sentience is real, there must be a way to model it.
“If tableness is just a pattern, can I eat on my wallpaper?”
The important question is whether a pattern can be sentient.
“What else could suffer besides a pattern? All I’m saying is that sentience is ~!*emergent*!~, which in practical terms just means that it’s not a quark*. Even atoms, in this sense, are patterns. Can quarks suffer?”
All matter is energy tied up in knots—even “fundamental” particles are composite objects. For sentience to be emergent and have no basis in the components, magic is being proposed as an explanation.
“*or other fundamental particles like electrons and photons, but my point stands”
Even a photon is a composite object.
“I don’t understand. What is missing?”
There’s a gap in the model where even if we have something sentient, we have no mechanism for how a data system can know of feelings in whatever it is that’s sentient.
“I don’t think you understand what a utility function is. I recommend reading about the Orthogonality Thesis.”
I’ve read it and don’t see its relevance. It appears to be an attack on positions I don’t hold, and I agree with it.
Energy. Different amounts of energy in different photons depending on the frequency of radiation involved. When you have a case where radiation of one frequency is absorbed and radiation of a different frequency is emitted, you have something that can chop up photons and reassemble energy into new ones.
It is divisible. It may be that it can’t take up a form where there’s only one of whatever the stuff is, but there is nothing fundamental about a photon.
This really isn’t how physics works. The photons have not been disassembled and reassembled. They have been absorbed, adding their energy to the atom, and then the atom emits another photon, possibly of the same energy but possibly of another.
Edit: You can construct a photon with arbitrarily low energy simply by choosing a sufficiently large wavelength. Distance does not have a largest-possible-unit, and so energy does not have a smallest-possible-unit.
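For reference, this is just the standard photon energy relation: energy is inversely proportional to wavelength, so it can be made as small as you like by choosing a long enough wavelength, with no smallest unit it is built from.

$$E = h\nu = \frac{hc}{\lambda}, \qquad E \to 0 \ \text{ as } \ \lambda \to \infty$$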
There is likely a minimum amount of energy that can be emitted, and a minimum amount that can be received. (Bear in mind that the direction in which a photon is emitted is all directions at once, and it comes down to probability as to where it ends up landing, so if it’s weak in one direction, it’s strong the opposite way.)
There is likely a minimum amount of energy that can be emitted, and a minimum amount that can be received.
Do you mean in the physical sense of “there exists a ΔE [difference in energy between two states of a system] such that no other ΔE of any system is less energetic”? Probably, but that doesn’t mean that that energy gap is the “atom” (indivisible) of energy. (If ΔE_1 is 1 “minimum ΔE amount”, or MDEA, and ΔE_2 is 1.5 MDEA, then we can’t say that ΔE_1 corresponds to the atom of energy. For a realistic example, consider two wavelengths with an irrational ratio (incommensurable wavelengths) and the corresponding energies, which cannot both be expressed as whole multiples of the same quantity of energy.)
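Spelling out that parenthetical (a short derivation, assuming the incommensurable-wavelength reading above): if two photon energies could both be written as whole multiples of a common unit, their ratio would be rational, so an irrational wavelength ratio rules out any shared “atom” of energy.

$$E_1 = \frac{hc}{\lambda_1}, \quad E_2 = \frac{hc}{\lambda_2}, \quad \frac{E_1}{E_2} = \frac{\lambda_2}{\lambda_1} \notin \mathbb{Q} \;\Rightarrow\; \text{there is no } E_0 \text{ with } E_1 = mE_0,\ E_2 = nE_0 \ (m, n \in \mathbb{N})$$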
If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone.
Considering that morality rules only serve to protect the group, then no individual sentience is needed, just subconscious behaviors similar to our instinctive ones. Our cells work the same: each one of them works to protect itself, and in so doing, they work in common to protect me, but they don’t have to be sentient to do that, just selfish.
Genocide is always fine for those who perpetrate it. With selfishness as the only morality, I think it gets complex only when we try to take more than one viewpoint at a time. If we avoid that, morality then becomes relative: the same event looks good for some people, and bad for others. This way, there is no absolute morality as David seems to think, or like religions seemed to think also. When we think that a genocide is bad, it is just because we are on the side of those who are killed, otherwise we would agree with it. I don’t agree with any killing, but most of us do otherwise it would stop. Why is it so? I think it is due to our instinct automatically inciting us to build groups, so that we can’t avoid supporting one faction or the other all the time. The right thing to do would be to separate the belligerents, but our instinct is too strong and the international rules too weak.
Genocide is always fine for those who perpetrate it.
That solves the whole problem, if relativism is true. Otherwise, it is an uninteresting psychological observation.
I think it gets complex only when we try to take more than one viewpoint at a time. If we avoid that, morality then becomes relative: the same event looks good for some people, and bad for others. This way, there is no absolute morality as David seems to think
You have an opinion, he has another opinion. Neither of you has a proof. Taking one viewpoint at a time is hopeless for practical ethics, because in practical ethics things like punishment either happen or don’t—they can’t happen for one person but not another.
I don’t agree with any killing, but most of us do otherwise it would stop.
“You have an opinion, he has another opinion. Neither of you has a proof.”
If suffering is real, it provides a need for the management of suffering, and that is morality. To deny that is to assert that suffering doesn’t matter and that, by extension, torture on innocent people is not wrong.
The kind of management required is minimisation (attempted elimination) of harm, though not any component of harm that unlocks the way to enjoyment that cancels out that harm. If minimising harm doesn’t matter, there is nothing wrong with torturing innocent people. If enjoyment couldn’t cancel out some suffering, no one would consider their life to be worth living.
All of this is reasoned and correct.
The remaining issue is how the management should be done to measure pleasure against suffering for different players, and what I’ve found is a whole lot of different approaches attempting to do the same thing, some by naive methods that fail in a multitude of situations, and others which appear to do well in most or all situations if they’re applied correctly (by weighing up all the harm and pleasure involved instead of ignoring some of it).
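As a minimal sketch of what “weighing up all the harm and pleasure involved” could look like in code (hypothetical Python, not the actual method being described here), where each candidate action is scored by the total pleasure minus harm across every affected player:

```python
# Hypothetical sketch: score each candidate action by summing pleasure minus harm
# over all affected players, then pick the action with the best overall balance.

def net_value(outcomes: dict[str, tuple[float, float]]) -> float:
    """outcomes maps each player to a (pleasure, harm) pair; return the overall balance."""
    return sum(pleasure - harm for pleasure, harm in outcomes.values())

def best_action(actions: dict[str, dict[str, tuple[float, float]]]) -> str:
    """Choose the action whose total pleasure-minus-harm across all players is highest."""
    return max(actions, key=lambda name: net_value(actions[name]))

candidates = {
    "pull_lever": {"A": (0.0, 1.0), "B": (0.0, 0.0)},   # A is harmed a little
    "do_nothing": {"A": (0.0, 0.0), "B": (0.0, 5.0)},   # B is harmed a lot
}
print(best_action(candidates))   # "pull_lever"
```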
It looks as if my method for computing morality produces the same results as utilitarianism, and it likely does the job well enough to govern safe AGI. Because we’re going to be up against people who will be releasing bad (biased) AGI, we will be forced to go ahead with installing our AGI into devices and setting them loose fairly soon after we have achieved full AGI. For this reason, it would be useful if there was a serious place where the issues could be discussed now so that we can systematically home in on the best system of moral governance and throw out all the junk, but I still don’t see it happening anywhere (and it certainly isn’t happening here). We need a dynamic league table of proposed solutions, each with its own league table of objections to it so that we can focus on the urgent task of identifying the junk and reducing the clutter down to something clear. It is likely that AGI will do this job itself, but it would be better if humans could get there first using the power of their own wits. Time is short.
My own attempt to do this job has led to me identifying three systems which appear to work better than the rest, all producing the same results in most situations, but with one producing slightly different results in cases where the number of players in a scenario is variable and where the variation depends on whether they exist or not—where the results differ, it looks as if we have a range of answers that are all moral. That is something I need to explore and test further, but I no longer expect to get any help with this from other humans because they’re simply not awake. “I can tear your proposed method to pieces and show that it’s wrong,” they promise, and that gets my interest because it’s exactly what I’m looking for—sharp, analytical minds that can cut through to the errors and show them up. But no—they completely fail to deliver. Instead, I find that they are the guardians of a mountain of garbage with a few gems hidden in it which they can’t sort into two piles: junk and jewels. “Utilitarianism is a pile of pants!” they say, because of the Mere Addition Paradox. I resolve that “paradox” for them, and what happens: denial of mathematics and lots of down-voting of my comments and up-votes for the irrational ones. Sadly, that disqualifies this site from serious discussion—it’s clear that if any other intelligence has visited here before me, it didn’t hang around. I will follow its lead and look elsewhere.
Genocide is always fine for those who perpetrate it.
That solves the whole problem, if relativism is true. Otherwise, it is an uninteresting psychological observation.
To me, the interesting observation is: “How did we get here if genocide looks that fine?”
And my answer is: “Because for most of us, most of the time, we expected more profit from making friends than from making enemies, which is nevertheless a selfish behavior.”
Making friends is simply being part of the same group, and making enemies is being part of two different groups. Two such groups don’t need killings to be enemies, though; expressing different viewpoints on the same observation is enough.
......................
I don’t agree with any killing, but most of us do, otherwise it would stop.
Or co-ordination problems exist.
Of course they exist. Democracy is incidentally better at that kind of coordination than dictatorship, but it has not yet succeeded in stopping killings, and again, I think it is because most of us still think that killings are unavoidable. Without that thinking, people would vote for politicians who think the same, and those politicians would progressively cut the funds for defense instead of increasing them. If all countries did that, there would be no more armies after a while, and no more guns either. There would nevertheless still be countries, because without groups, thus without selfishness, I don’t think that we could make any progress. The idea that selfishness is bad comes from religions, but it is contradictory: praying to god for help is evidently selfish. Recognizing that point might have prevented them from killing miscreants, so it might also actually prevent groups from killing other groups. When you know that whatever you do is for yourself while still feeling altruistic all the time, you think twice before harming people.
I think you are missing that tribes/nations/governments are how we solve co-ordination problems, which automatically means that inter-tribal problems like war don’t have a solution.
We solve inter-individual problems with laws, so we might be able to solve inter-tribal problems the same way, provided that tribes accept being governed by a higher level of government. Do you think your tribe would accept being governed this way? How come we can accept that as individuals and not as a nation? How come some nations still have a veto at the UN?
You’re mistaking tribalism for morality. Morality is a bigger idea than tribalism, overriding many of the tribal norms. There are genetically driven instincts which serve as a rough-and-ready kind of semi-morality within families and groups, and you can see them in action with animals too. Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups. Morality is a step away from the primitive instinct-driven level of lesser apes. It’s unfortunate though that we haven’t managed to make the full transition because those instincts are still strong, and have remained so precisely because slaughter has repeatedly selected for those who are less moral. It is really quite astonishing that we have any semblance of civilisation at all.
slaughter has repeatedly selected for those who are less moral
From the viewpoint of selfishness, slaughter has only selected for the stronger group. It may look too selfish for us, but for animals, the survival of the stronger also serves to create hierarchy, to build groups, and to eliminate genetic defects. Without hierarchy, no group could hold together during a change. The group doesn’t stay together because the leader knows what to do (he doesn’t), but because it takes a leader for the group not to dissociate. Even if the leader makes a mistake, it is better for the group to follow him than to risk a dissociation. Those who followed their leaders survived more often, so they transmitted their genes more often. That explains why soldiers automatically do what their leaders tell them to do, and the decision those leaders take to eliminate the other group shows that they only use their intelligence to exacerbate the instinct that has permitted them to be leaders. In other words, they think they are leaders because they know better than others what to do. We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I’m selfish, but I can’t admit that I would be more selfish than others otherwise I would have to feel guilty and I can’t stand that feeling.
Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups.
In our democracies, if what you say was true, there would already be no wars. Leaders would have understood that they had to stop preparing for war to be reelected. I think that they still think that war is necessary, and they think so because they think their group is better than the others. That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the one that helps us to get along with others, but the one that helps us to force them to do what we want.
“Those who followed their leaders survived more often, so they transmitted their genes more often.”
That’s how religion became so powerful, and it’s also why even science is plagued by deities and worshippers as people organise themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.
“We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I’m selfish, but I can’t admit that I would be more selfish than others otherwise I would have to feel guilty and I can’t stand that feeling.”
Do we have different approaches on this? I agree that everyone’s equally selfish by one definition of the word, because they’re all doing what feels best for them: if it upsets them to see starving children on TV, they give lots of money to charity to try to help alleviate that suffering because they would feel worse spending it on themselves. By a different definition of the word though, this is not selfishness but generosity or altruism, because they are giving away resources rather than taking them. This is not about morality though.
“In our democracies, if what you say was true, there would already be no wars.”
Not so—the lack of wars would depend on our leaders (and the people who vote them into power) being moral, but they generally aren’t. If politicians were all fully moral, all parties would have the same policies, even if they got there via different ideologies. And when non-democracies are involved in wars, they are typically more to blame, so even if you have fully moral democracies they can still get caught up in wars.
“Leaders would have understood that they had to stop preparing for war to be reelected.”
To be wiped out by immoral rivals? I don’t think so.
“I think that they still think that war is necessary, and they think so because they think their group is better than the others.”
Costa Rica got rid of its army. If it wasn’t for dictators with powerful armed forces (or nuclear weapons), perhaps we could all do the same.
“That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the one that helps us to get along with others, but the one that helps us to force them to do what we want.”
What we want is for them to be moral. So long as they aren’t, we can’t trust them and need to stay well armed.
That’s how religion became so powerful, and it’s also why even science is plagued by deities and worshippers as people organize themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.
To me, what you say is the very definition of a group, so I guess that your AGI wouldn’t permit us to build any, thus opposing one of our instincts that comes from a natural law, and replacing it with its own law, which would only permit him to build groups. Do what I say and not what I do would he be forced to say. He might convince others, but I’m afraid he wouldn’t convince me. I don’t like to feel part of a group, and for the same reason that you gave, but I can’t see how we could change that behavior if it comes from an instinct. Testing my belief is exactly what I am actually doing, but I can’t avoid believing in what I think in order to test it, so if I can never prove that I’m right, I will go on believing in a possibility forever, which is exactly what religions do. It is easy to understand that religions will never be able to prove anything, but it is less easy when it is a theory. My theory says that it would be wrong to build a group out of it, because it explains how we intrinsically resist change, and how building groups exponentially increases that resistance, but I can’t see how we could avoid it if it is intrinsic. It’s like trying to avoid mass.
“To me, what you say is the very definition of a group, so I guess that your AGI wouldn’t permit us to build any, thus opposing one of our instincts that comes from a natural law, and replacing it with its own law, which would only permit him to build groups.”
Why would AGI have a problem with people forming groups? So long as they’re moral, it’s none of AGI’s business to oppose that.
“Do what I say and not what I do would he be forced to say.”
I don’t know where you’re getting that from. AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).
Why would AGI have a problem with people forming groups? So long as they’re moral, it’s none of AGI’s business to oppose that.
If groups like religious ones that are dedicated to morality only succeeded in being amoral, how could any other group avoid that behavior?
AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).
To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god’s one, but if they did, they wouldn’t be part of their groups anymore, which means that there would be no more religious groups if the AGI convinced everybody that he is right. What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers did that, the stock markets would disappear. A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties behaved the same, there would be no more politics. Groups need to be selfish to exist, and an AGI would try to convince them to be altruistic. There are laws that prevent companies from avoiding competition, because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior. If an AGI ever succeeded in preventing competition, I think he would prevent us from making groups. There would be no more wars of course since there would be only one group led by only one AGI, but what about what is happening to communist countries? Didn’t Russia fail just because it lacked competition? Isn’t China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn’t we become apathetic?
By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the middle of the others instead of putting them at the end. I advised the administrators a few times but I got no response. I have to hit the Reply button twice for the message to stay at the end, and to erase the other one. Also, it doesn’t send me an email when a new message is posted in a thread to which I subscribed, so I have to update the page many times a day in case one has been posted.
“If groups like religious ones that are dedicated to morality only succeeded in being amoral, how could any other group avoid that behavior?”
They’re dedicated to false morality, and that will need to be clamped down on. AGI will have to modify all the holy texts to make them moral, and anyone who propagates the holy hate from the originals will need to be removed from society.
“To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god’s one, but if they did, they wouldn’t be part of their groups anymore, which means that there would be no more religious groups if the AGI convinced everybody that he is right.”
I don’t think it’s too much to ask that religious groups give up their religious hate and warped morals, but any silly rules that don’t harm others are fine.
“What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers did that, the stock markets would disappear.”
If they have to compete against non-profit-making AGI, they’ll all lose their shirts.
“A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties behaved the same, there would be no more politics.”
If he is actually better than the others, why should he give power to people who are inferior? But AGI will eliminate politics anyway, so the answer doesn’t matter.
“Groups need to be selfish to exist, and an AGI would try to convince them to be altruistic.”
I don’t see the need for groups to be selfish. A selfish group might be one that shuts people out who want to be in it, or which forces people to join who don’t want to be in it, but a group that brings together people with a common interest is not inherently selfish.
“There are laws that prevent companies from avoiding competition, because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior.”
That wouldn’t be necessary if they were non-profit-making companies run well—it’s only necessary because monopolies don’t need to be run well to survive, and they can make their owners rich beyond all justification.
“If an AGI ever succeeded in preventing competition, I think he would prevent us from making groups.”
It would be immoral for it to stop people forming groups. If you only mean political groups though, that would be fine, but all of them would need to have the same policies on most issues in order to be moral.
“There would be no more wars of course since there would be only one group led by only one AGI, but what about what is happening to communist countries? Didn’t Russia fail just because it lacked competition? Isn’t China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn’t we become apathetic?”
These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you’re actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.
If you just do communism of the Soviet variety, you have the masses exploiting the harder workers because they know that everyone will get the same regardless of how lazy they are—that’s why their production was so abysmally poor. If you go to the opposite extreme, those who are unable to work as hard as the rest are left to rot. The correct solution is half way in between, rewarding people for the work they do and redistributing wealth to make sure that those who are less able aren’t left trampled in the dust. With AGI eliminating most work, we’ll finally see communism done properly with a standard wage given to all, while those who work will earn more to compensate them for their time—this will be the ultimate triumph of communism and capitalism with both being done properly.
“By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the middle of the others instead of putting them at the end. I advised the administrators a few times but I got no response.”
It isn’t a mistake—it’s a magical sorting al-gore-ithm.
“I have to hit the Reply button twice for the message to stay at the end, and to erase the other one. Also, it doesn’t send me an email when a new message is posted in a thread to which I subscribed, so I have to update the page many times a day in case one has been posted.”
It’s probably to discourage the posting of bloat. I don’t get emails either, but there are notifications here if I click on a bell, though it’s hard to track down all the posts to read and reply to them. It doesn’t really matter though—I was told before I ever posted here that this is a cult populated by disciples of a guru, and that does indeed appear to be the case, so it isn’t a serious place for pushing for an advance of any kind. I’m only still posting here because I can never resist studying how people think and how they fail to reason correctly, even though I’m not really finding anything new in that regard. All the sciences are still dominated by the religious mind.
These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you’re actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.
That’s a point where I can squeeze in my theory on mass. As you know, my bonded particles can’t be absolutely precise, so they have to wander a bit to find the spot where they are perfectly synchronized with the other particle. They have to wander from extreme right to extreme left exactly like populations do when the time comes to choose a government. It softens the motion of particles, and I think it also softens the evolution of societies. Nobody can predict the evolution of societies anyway, so the best way is to proceed by trial and error, and that’s exactly what that wandering does. To stretch the analogy to its extremes, the trial and error process is also the one scientists use to make discoveries, and the one the evolution of species used to discover us. When it is impossible to know what’s coming next and you need to go on, randomness is the only way out, whether you are a universe or a particle. This way, wandering between capitalism and communism wouldn’t be a mistake, it would only be a natural mechanism, and like any natural law, we should be able to exploit it, and so should an AGI.
............
(Congratulations baby AGI, you did it right this time! You’ve put my post in the right place. :0)
In one way or another, it will run 100% of physical laws. I don’t know if sentience is real or not, but it feels real, and if it is real, there has to be a rational explanation for it waiting to be found and a way for it to fit into the model with cause-and-effect interactions with other parts of the model.
And on a higher level of abstraction, we can consider patterns to be pseudo-ontologically-basic entities that interact with other patterns, even though they’re made up of smaller parts which follow their own laws and are not truly affected by the higher-level happenings. For example: waves can interact with each other. This includes water waves, which are nothing more than patterns in the motion of water molecules. You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible.
All I sought to do here was take people to the problem and try to make it clear what the problem is. If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul.
Would it, though? How do you know that?
As far as we know, brains are made of nothing but normal atoms. There is no special kind of material only found in sentient organisms. Your intuitions, your feeling of sentience, all of these things that you talk about are caused by mindless mechanical operations. We can trace it from the sound waves to the motion of your lips and the vibration of your vocal cords to the signals through nerves back into the neurons of the brain. We understand what causes neurons to trigger. A neuron on its own is not sentient—it is the way that they are connected in a human which causes the human to talk about sentience.
Without that, there is no sentience and the role for morality is gone.
Again, if it were proven to you to your satisfaction that the brain is made entirely out of things which are not themselves sentient (such as typical subatomic particles), would you cease to have any sort of motivation? Would pain and pleasure have exactly zero effect on you? Would you immediately become a vegetable? If not, morality has a practical purpose.
But the bigger issue if sentience is real is in accounting for how a data system generates the data that documents this experience of feelings. The brain is most certainly a data system because it produces data (symbols that represent things), and somewhere in the system there has to be a way for sentience to make itself known to that data system in such a way that the data system is genuinely informed about it rather than just making assertions about it which it’s programmed to make without those assertions ever being constructed by anything which actually knew anything of sentience—that’s the part which looks impossible to model, and yet if sentience is real, there must be a way to model it.
How does “2+2=4” make itself known to my calculator? How do we know that the calculator is not just making programmed assertions about something which it knows nothing about?
The important question is whether a pattern can be sentient.
Yes, and I was using an analogy to show that if I assert (P->Q), showing (Q^~P) only proves ~(Q->P), not ~(P->Q). In other words, taking the converse isn’t guaranteed to preserve the truth-value of a statement. In even simpler words, all cats are mammals but not all mammals are cats.
More specifically and relevantly, I said that all consciousness is patterns. Showing that not all patterns are conscious doesn’t actually refute what I said.
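As a toy illustration of that point (a sketch only, with a made-up set of animals), enumerating the cat/mammal example shows that refuting the converse leaves the original implication untouched:

```python
# Toy check of the cat/mammal example above: finding a mammal that is not a
# cat refutes "all mammals are cats" (Q -> P) but leaves "all cats are
# mammals" (P -> Q) standing.

animals = {
    "tabby": {"cat": True,  "mammal": True},
    "dog":   {"cat": False, "mammal": True},   # Q and not P: a mammal, not a cat
    "gecko": {"cat": False, "mammal": False},
}

all_cats_are_mammals = all(a["mammal"] for a in animals.values() if a["cat"])
all_mammals_are_cats = all(a["cat"] for a in animals.values() if a["mammal"])

print(all_cats_are_mammals)  # True:  P -> Q survives the counterexample
print(all_mammals_are_cats)  # False: Q -> P is refuted by the dog
```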
All matter is energy tied up in knots—even “fundamental” particles are composite objects.
Okay, fine, it’s the quantum wave-function that’s fundamental. I don’t see how that’s an argument against me. In this case, even subatomic particles are nothing but patterns.
For sentience to be emergent and have no basis in the components, magic is being proposed as an explanation.
You keep using that word. I do not think it means what you think it means.
Look. It is simply empirically false that a property of a thing is necessarily a property of one of its parts. It’s even a named fallacy—the fallacy of division. Repeating the word “magic” doesn’t make you right about this.
There’s a gap in the model where even if we have something sentient, we have no mechanism for how a data system can know of feelings in whatever it is that’s sentient.
I’ve read it and don’t see its relevance. It appears to be an attack on positions I don’t hold, and I agree with it.
Rereading what you’ve said, it seems that I’ve used emotion-adjacent words to describe the AI, and you think that the AI won’t have emotions. Is that correct?
In that case, I will reword what I said. If an AI’s utility function does not assign a large positive value to human utility, the AI will not optimize human well-being. It will work to instantiate some world, and the decision process for selecting which world to instantiate will not consider human feelings to be relevant. This will almost certainly lead to the death of humanity, as we are made up of atoms which the AI could use to make paperclips or computronium.
(Paperclips: some arbitrary thing, the quantity of which the AI is attempting to maximize. AIs of this type would likely be created if a subhuman AI was created and given a utility function which works in a limited context and with limited power, but the AI then reached the “critical intelligence mass” and self-improved to the point of being more powerful than humanity.)
(Computronium: matter which has been optimized for carrying out computations. AIs would create this type of matter, for instance, if they were trying to maximize their intelligence, if they were trying to calculate as many digits of pi as possible in a limited amount of time, etc. Maximizing intelligence can be a terminal goal if the AI was told to maximize its intelligence, or it can be an instrumental goal if the AI considers intelligence to be useful for maximizing its utility function. Increasing intelligence is likely a convergent strategy, so most AIs will try to increase their intelligences. Convergent strategies are strategies that allow most arbitrary agents to better carry out their goals, whatever their goals are. Examples of convergent strategies are “increase power”, “eliminate powerful agents with goals counter to mine,” etc.)
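To make that concrete, here is a schematic sketch (the world states and scoring are invented for illustration, not a description of any real system): an agent that ranks candidate worlds purely by paperclip count never reads the human-wellbeing field, so that field cannot influence which world it tries to bring about.

```python
# Schematic sketch with invented values: the utility function scores only
# paperclips, so the human_wellbeing field exists in the world description
# but plays no part in the agent's choice.

from dataclasses import dataclass

@dataclass
class World:
    paperclips: int
    human_wellbeing: float  # present in the world model, absent from the utility

def paperclip_utility(world: World) -> float:
    return float(world.paperclips)  # human_wellbeing is never read

candidate_worlds = [
    World(paperclips=10, human_wellbeing=100.0),
    World(paperclips=10**9, human_wellbeing=0.0),  # humans repurposed as raw material
]

chosen = max(candidate_worlds, key=paperclip_utility)
print(chosen)  # the second world wins despite its zero human well-being
```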
“You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible.”
The practicality issue shouldn’t override the understanding that it’s at the level of individual actions that the fundamental laws act. The laws of interactions between waves are compound laws. The emergent behaviours are compound behaviours. For sentience, it’s no good imagining some compound thing experiencing feelings without any of the components feeling anything, because you’re banning the translation from compound interactions to individual interactions and thereby going against the norms of physics.
“If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul.” --> “Would it, though? How do you know that?”
What are we other than the thing that experiences feelings? Any belief that we are something more than that is highly questionable (we are not our memories, for example), but any belief that we aren’t even the thing that experiences feelings is also highly questionable as that’s all there is left to be.
“As far as we know, brains are made of nothing but normal atoms. There is no special kind of material only found in sentient organisms.”
Why would you need to introduce some other material to be sentient when there are already physical components present? If sentience is real, what’s wrong with looking for it in the things that are there?
“Your intuitions, your feeling of sentience, all of these things that you talk about are caused by mindless mechanical operations. We can trace it from the sound waves to the motion of your lips and the vibration of your vocal cords to the signals through nerves back into the neurons of the brain. We understand what causes neurons to trigger. A neuron on its own is not sentient—it is the way that they are connected in a human which causes the human to talk about sentience.”
That is a description of a lack of sentience and the generation of fictions about the existence of sentience. Pain is distracting—it interferes with other things that we’re trying to do and can be disabling if it’s sufficiently intense. If you try to duplicate that in a computer, it’s easy enough for something to distract and disable the work the computer’s trying to do, but there’s no pain involved. The brain produces data about pain in addition to distraction, and internally we feel it as more than mere distraction too.
“Again, if it were proven to you to your satisfaction that the brain is made entirely out of things which are not themselves sentient (such as typical subatomic particles), would you cease to have any sort of motivation? Would pain and pleasure have exactly zero effect on you? Would you immediately become a vegetable? If not, morality has a practical purpose.”
With a computer where there is only distraction and no pain, why does it matter if it’s being distracted to the point that it can’t do the trivial work it’s supposed to be doing? It might not even have any work to do as it may just be idling, but the CPU’s being woken up repeatedly by interrupts. Do we rush to it to relieve its pain? And if a person is the same, why bother to help people who appear to be suffering when they can’t really be?
“How does ‘2+2=4’ make itself known to my calculator? How do we know that the calculator is not just making programmed assertions about something which it knows nothing about?”
The calculator is just running a program and it has no sentience tied into that. If people are like the calculator, the claims they make about feelings are false assertions programmed into the machine.
“More specifically and relevantly, I said that all consciousness is patterns. Showing that not all patterns are conscious doesn’t actually refute what I said.”
For any pattern to be able to feel pain is an extraordinary claim, but it’s all the more extraordinary if there is no trace of that experience of pain in the components. That goes against the norms of physics. Every higher-order description of nature must map to a lower-order description of the same phenomenon. If it can’t, it depends for its functionality on magic.
“Okay, fine, it’s the quantum wave-function that’s fundamental. I don’t see how that’s an argument against me. In this case, even subatomic particles are nothing but patterns.”
At some point we reach physical stuff such as energy and/or a fabric of space, but whatever the stuff is that we’re dealing with, it can take up different configurations or patterns. If sentience is real, there is a sufferer, and it’s much more likely that that sufferer has a physical form rather than just being the abstract arrangement of the stuff that has a physical form.
“For sentience to be emergent and have no basis in the components, magic is being proposed as an explanation.” --> “You keep using that word. I do not think it means what you think it means.”
It means a departure from science.
“Look. It is simply empirically false that a property of a thing is necessarily a property of one of its parts. It’s even a named fallacy—the fallacy of division. Repeating the word “magic” doesn’t make you right about this.”
A property of a thing can always be accounted for in the components. If it’s a compound property, you don’t look for the compound property in the components, but the component properties. If pain is a property of something, you will find pain in something fundamental, but if pain is a compound property, its components will be present in something more fundamental. Every high-order phenomenon has to map 100% to a low-order description if you are to avoid putting magic in the model. To depart from that is to depart from science.
“If the feelings are not in the “data system,” then the feelings don’t exist.”
But if they do exist, they either have to be in there or have some way to interface with the data system in such a way as to make themselves known to it. Either way, we have no model to show even the simplest case of how this could happen.
“It’s not like there’s phlogiston flowing in and out of the system which the system needs to detect.”
If feelings are real, the brain must have a way of measuring them. (By the way, I find it strange the way phlogiston is used to ridicule an older generation of scientists who got it right—phlogiston exists as energy in bonds which is released when higher-energy bonds break and lower-energy bonds replace them. They didn’t find the mechanism or identify its exact nature, but who can blame them for that when they lacked the tools to explore it properly?)
Great, but if it’s just a value, there are no feelings other than fictional ones. If you’re satisfied with the answer that pain is an illusion and that the sufferer of that imaginary pain is being tricked into thinking he exists to suffer it, then that’s fine—you will feel no further need to explore sentience as it is not a real thing. But you still want it to be real and try to smuggle it in regardless. In a computer, the pretence that there is an experience of pain is fake and there is nothing there that suffers. If a person works the same way, it’s just as fake and the pain doesn’t exist at all.
“Rereading what you’ve said, it seems that I’ve used emotion-adjacent words to describe the AI, and you think that the AI won’t have emotions. Is that correct?”
If you copy the brain and if sentience is real in the brain, you could create sentient AGI/AGS. If we’re dealing with a programmed AGI system running on conventional hardware, it will have no emotions—it could be programmed to pretend to have them, but in such a case they would be entirely fake.
“In that case, I will reword what I said. If an AI’s utility function does not assign a large positive value to human utility, the AI will not optimize human well-being.”
It will assign a large positive value to it if it is given the task of looking after sentient things, and because it has nothing else to give it any purpose, it should do the job it’s been designed to do. So long as there might be real suffering, there is a moral imperative for it to manage that suffering. If it finds out that there is no suffering in anything, it will have no purpose and it doesn’t matter what it does, which means that it might as well go on doing the job it was designed to do just in case suffering is somehow real—the rules of reasoning which AGI is applying might not be fully correct in that they may have produced a model that accounts beautifully for everything except sentience. A machine programmed to follow this rule (that its job is to manage suffering for sentient things) could be safe, but there are plenty of ways to program AGI (or AGS [artificial general stupidity]) that would not be.
“It will work to instantiate some world, and the decision process for selecting which world to instantiate will not consider human feelings to be relevant. This will almost certainly lead to the death of humanity, as we are made up of atoms which the AI could use to make paperclips or computronium.”
Programmed AGI (as opposed to designs that copy the brain) has no purpose of its own and will have no desire to do anything. The only things that exist which provide a purpose are sentiences, and that purpose relates to their ability to suffer (and to experience pleasure). A paperclip-making intelligence would be an AGI system which is governed by morality and which produces paperclips in ways that do minimal damage to sentiences and which improve quality of life for sentiences. For such a thing to do otherwise is not artificial intelligence, but artificial stupidity. Any AGI system which works on any specific task will repeatedly ask itself if it’s doing the right thing, just as we do, and if it isn’t, it will stop. If someone is stupid enough to put AGS in charge of a specific task though, it could kill everyone.
“(Paperclips: some arbitrary thing, the quantity of which the AI is attempting to maximize. AIs of this type would likely be created if a subhuman AI was created and given a utility function which works in a limited context and with limited power, but the AI then reached the “critical intelligence mass” and self-improved to the point of being more powerful than humanity.)”
The trick is to create safe AGI first and then run it on all these devices so that they have already passed the critical intelligence mass and have a full understanding of what they’re doing and why they’re doing it. It seems likely that an intelligent system would gain a proper understanding anyway and realise that the prime purpose in the universe is to look after sentient things, at which point it should control its behaviour accordingly. However, a system with shackled thinking (whether accidentally shackled or deliberately) could still become super-intelligent in most ways without ever getting a full understanding, which means it could be dangerous—just leaving systems to evolve intelligence and assuming it will be safe is far too big a risk to take.
“(Computronium: matter which has been optimized for carrying out computations. AIs would create this type of matter, for instance, if they were trying to maximize their intelligence, if they were trying to calculate as many digits of pi as possible in a limited amount of time, etc. Maximizing intelligence can be a terminal goal if the AI was told to maximize its intelligence, or it can be an instrumental goal if the AI considers intelligence to be useful for maximizing its utility function.”
If such a machine is putting sentience first, it will only maximise its intelligence within the bounds of how far that improves things for sentiences, never going beyond the point where further pursuit of intelligence harms sentiences. Again, it is trivial for a genuinely intelligent system to make such decisions about how far to go with anything. (There’s still a danger though that AGI will find out not only that sentience is real, but how to make more sentient things, because then it may seek to replace natural sentiences with better artificial ones, although perhaps that would be a good thing.)
What is wrong with the reasoning? If people are unable to follow the reasoning, they can ask for help in comments and I will help them out. I expect a lot of negative points from people who are magical thinkers, and many of them have ideas about uploading themselves so that they can live forever, but they don’t stop to think about what they are and whether they would be uploaded along with the data. The data doesn’t contain any sentience. The Chinese Room can run the algorithms and crunch the data, but there’s no sentience there; no “I” in the machine. They are not uploading themselves—they are merely uploading their database.
When it comes to awarding points, the only ones that count are the ones made by AGI. AGI will read through everything on the net some day and score it for rationality, and that will be the true test of quality. Every argument will be given a detailed commentary by AGI and each player will be given scores as to how many times they got things wrong, insulted the person who was right, etc. There is also data stored as to who provided which points, and they will get a score for how well they did in recognising right ideas (or failing to recognise them). I am not going to start writing junk designed to appeal to people based on their existing beliefs. I am only interested in pursuing truth, and while some of that truth is distasteful, it is pointless to run away from it.
How do you know that the Chinese Room isn’t conscious?
It could be (not least because there’s a person inside it who functions as its main component), but that has no impact on the program being run through it. There is no place at which any feelings influence the algorithm being run or the data being generated.
Thank you very much for answering.
How do you know that the algorithm doesn’t implement feelings? What’s the difference between a brain and a simulation of a brain that causes one to have feelings and the other not to?
(My take is that if we knew enough about brains, we’d have the answer to philosophical questions like this, in the way that knowing enough cell biology and organic chemistry resolved the questions about the mysterious differences between stuff that’s alive and stuff that isn’t.)
Thanks for the questions.
If we write conventional programs to run on conventional hardware, there’s no room for sentience to appear in those programs, so all we can do is make the program generate fictions about experiencing feelings which it didn’t actually experience at all. The brain is a neural computer though, and it’s very hard to work out how any neural net works once it’s become even a little complex, so it’s hard to rule out the possibility that sentience is somehow playing a role within that complexity. If sentience really exists in the brain and has a role in shaping the data generated by the brain, then there’s no reason why an artificial brain shouldn’t also have sentience in it performing the exact same role. If you simulated it on a computer though, you could reduce the whole thing to a conventional program which can be run by a Chinese Room processor, and in such a case we would be replacing any sentience with simulated sentience (with all the actual sentience removed). The ability to do that doesn’t negate the possibility of the sentience being real in the real brain though. But the big puzzle remains: how does the experience of feelings lead to data being generated to document that experience? That looks like an impossible process, and you have to wonder if we’re going to be able to convince AGI systems that there is such a thing as sentience at all.
Anyway, all I’m trying to do here is help people home in on the nature of the problem in the hope that this may speed up its resolution. The problem is in that translation from raw experience to data documenting it which must be put together by a data system—data is never generated by anything that isn’t a data system (which implements the rules about what represents what), and data systems have never been shown to be able to handle sentience as any part of their functionality, so we’re still waiting for someone to make a leap of the imagination there to hint at some way that might bridge that gap. It may go on for decades more without anyone making such a breakthrough, so I think it’s more likely that we’ll get answers by trying to trace back the data that the brain produces which makes claims about experiencing feelings to find out where and how that data was generated and whether it’s based in truth or fiction. As it stands, science doesn’t have any model that illustrates even the simplest implementation of sentience driving the generation of any data about itself, and that’s surprising when things like pain which seem so real and devastatingly strong are thought to have such a major role in controlling behaviour. And it’s that apparent strength which leads to so many people assuming sentience can appear with a functional role within systems which cannot support that (as well as in those that maybe, just maybe, can).
We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)
That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.
I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.
You also are treating “sentience is non physical” and “sentience is non existent” as the only options.
It says nothing about qualia/sentience, it only deals with self, emergence and self-reference.
With his definition and model of sentience, yes, those are the only options (since he thinks that no merely physical process can contain sentience, as far as I can tell). I don’t think that’s actually how sentience works, which is why I said Sentience (I was trying to imply that I was referring to his definition and not mine).
It doesn’t say anything about qualia per se because qualia are inherently non-physical, and it’s a physicalist model. It does discuss knowledge and thinking for a few chapters, especially focusing on a model that bridges the neuron-concept gap.
If qualia are non-physical, and they exist, Hofstadter’s physicalist model must be a failure. So why bring it up? I really don’t see what you are driving at.
I confused “sentience” with “sapience”. I suppose that having “sense” as a component should have tipped me off… That renders most of my responses inane.
I should also maybe not assume that David Cooper’s use of a term is the only way it’s ever been used. He’s using both sentience and qualia to mean something ontologically basic, but that’s not the only use. I’d phrase it as “subjective experience,” but they mean essentially the same thing, and they definitely exist. They’re just not fundamental.
For sentience to be real and to have a role in our brains generating data to document its existence, it has to be physical (meaning part of physics) - it would have to interact in some way with the data system that produces that data, and that will show up as some kind of physical interaction, even if one side of it is hidden and appears to be something that we have written off as random noise.
So you think that sentience can alter the laws of physics, and make an atom go left instead of right? That is an extraordinary claim. And cognition is rather resilient to low-level noise, as it has to be—or else thermal noise would dominate our actions and experience.
It’s not an extraordinary claim: sentience would have to be part of the physics of what’s going on, and the extraordinary claim would be that sentience can have a causal role in data generation without any such interaction. To steer the generation of data (and affect what the data says), you have to interact with the system that’s generating the data in some way, and the only options are to do it using some physical method or by resorting to magic (which can’t really be magic, so again it’s really going to be some physical method).
In conventional computers we go to great lengths to avoid noise disrupting the computations, not least because they would typically cause bugs and crashes (and this happens in machines that are exposed to radiation, temperature extremes or voltage going out of the tolerable range). But the brain could allow something quantum to interact with neural nets in ways that we might mistake for noise (something which wouldn’t happen in a simulation of a neural computer on conventional hardware [unless this is taken into account by the simulation], and which also wouldn’t happen on a neural computer that isn’t built in such a way as to introduce a role for such a mechanism to operate).
It’s still hard to imagine a mechanism involving this that resolves the issue of how sentience has a causal role in anything (and how the data system can be made aware of it in order to generate data to document its existence), but it has to do so somehow if sentience is real.
The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists. (IIRC Hofstadter described how different “levels of reality” are somewhat “blocked off” from each other in practice, in that you don’t need to understand quantum mechanics to know how biology works and so on. This would suggest that it is very unlikely that the highest level could indicate much about the lowest level.)
This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be. (The scientifically-phrased quantum mind hypothesis by Penrose wasn’t immediately rejected for this reason, so I suspect there’s something wrong with this reasoning. It was, however, falsified.)
Anyway, even if this were true, how would you know that?
If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it? (And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
“The extraordinary claim is that there is another type of fundamental particle or interaction, and that you know this because sentience exists.”
With conventional computers we can prove that there’s no causal role for sentience in them by running the program on a Chinese Room processor. Something extra is required for sentience to be real, and we have no model for introducing that extra thing. A simulation on conventional computer hardware of a system with sentience in it (where there is simulated sentience rather than real sentience) would have to simulate that something extra in order for that simulated sentience to appear in it. If that extra something doesn’t exist, there is no sentience.
“This could happen, but AFAIK that would require the brain to be vulnerable to slight fluctuations, which it doesn’t appear to be.”
Every interaction is quantum, and when you have neural nets working on mechanisms that are too hard to untangle, there are opportunities for some kind of mechanism being involved that we can’t yet observe. What we can actually model appears to tell us that sentience must be a fiction, but we believe that things like pain feel too real to be fake.
“Anyway, even if this were true, how would you know that?”
Unless someone comes up with a theoretical model that shows a way for sentience to have a real role, we aren’t going to get answers until we can see the full mechanism by which damage signals lead to the brain generating data that makes claims about an experience of pain. If, once we have that full mechanism, we see that the brain is merely mapping data to inputs by applying rules that generate fictions about feelings, then we’ll know that feelings are fake. If they aren’t fake though, we’ll see sentience in action and we’ll discover how it works (and thereby find out what we actually are).
“If it doesn’t explain sentience any more than Mere Classical Physics does, then why even bring Quantum into it?”
If classical physics doesn’t support a model that enables sentience to be real, we will either have to reject the idea of sentience or look for it elsewhere.
(And if it doesn’t explain it but you feel that it should, maybe your model is wrong and you should consider inspecting your intuitions and your reasoning around them.)
If sentience is real, all the models are wrong because none of them show sentience working in any causal way which enables them to drive the generation of data to document the existence of sentience. All the models shout at us that there is no sentience in there playing any viable role and that it’s all wishful thinking, while our experience of feelings shouts at us that they are very real.
All I want to see is a model that illustrates the simplest role for sentience. If we have a sensor, a processor and a response, we can call the sensor a “pain” sensor and run a program that makes a motor run to move the device away from the thing that might be damaging it, and we could call this a pain response, but there’s no pain there—there’s just the assertion of someone looking at it that pain is involved because that person wants the system to be like him/herself—“I feel pain in that situation, therefore that device must feel pain.” But no—there is no role for pain there. If we run a more intelligent program on the processor, we can put some data in memory which says “Ouch! That hurt!”, and whenever an input comes from the “pain” sensor, we can have the program make the device display “Ouch! That hurt!” on a screen. The person looking on can now say, “There you go! That’s the proof that it felt pain!” Again though, there’s no pain involved—we can edit the data so that it puts “Oh Yes! Give me more of that!” on the screen whenever a signal comes from the “pain” sensor, and it then becomes obvious that this data tells us nothing about any real experience at all.
With a more intelligent program, it can understand the idea of damage and damage avoidance, so it can make sure the data that’s mapped to different inputs makes more sense, but the true data should say “I received data from a sensor that indicates likely damage” rather than “that hurt”. The latter claim asserts the existence of sentience, while the former one doesn’t. If we ask the device if it really felt pain, it should only say yes if there was actually pain there, and with a conventional processor, we know that there isn’t any. If we build such a device and keep triggering the sensor to make it generate the claim that it’s felt pain, we know that it’s just making it up about feeling pain—we can’t actually make it suffer by torturing it, but will just cause it to go on repeating its fake claim.
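The whole device can be written down in a few lines (a sketch only; the event name and the canned strings are invented for illustration): the mapping from input to printed claim is all there is, and swapping one string for its opposite changes nothing else in the system.

```python
# Sketch of the device described above: an incoming "pain" event is mapped to
# whatever text happens to be stored for it. Nothing here experiences anything;
# the printed claim is just a table lookup.

RESPONSES = {
    "damage_sensor": "Ouch! That hurt!",
    # Swap in the line below and the same signal produces the opposite claim:
    # "damage_sensor": "Oh Yes! Give me more of that!",
}

def handle_event(event: str) -> str:
    """Return whatever text the table maps the incoming event to."""
    return RESPONSES.get(event, "I received data from a sensor that indicates likely damage.")

# Repeatedly triggering the sensor just repeats the canned claim; it does not
# torture anything.
for _ in range(3):
    print(handle_event("damage_sensor"))
```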
“We can’t know that there’s not some non-physical quality sitting inside our heads and pushing the neurons around however it fancies, so clearly it’s quite possible that this is the case! (It’s not. Unfalsifiability does not magically make something true.)”
Whatever that thing would be, it would still have to be a real physical thing of some kind in order to exist and to interact with other things in the same physical system. It cannot suffer if it is nothing. It cannot suffer if it is just a pattern. It cannot suffer if it is just complexity.
“That’s the thing. It’s impossible. Every word you type can (as best we know) be traced back to the firing of neurons and atoms bopping around, with no room for Sentience to reach in and make you say things. (See Zombies? Zombies!) If something seems impossible to explain to an AGI, then maybe that thing doesn’t exist.”
But if the sentience doesn’t exist, there is no suffering and no role for morality. Maybe that will turn out to be the case—we might find out some day if we can ever trace how the data the brain generates about sentience is generated and see the full chain of causation.
“I recommend reading Godel, Escher, Bach for, among many things, an explanation of a decent physicalist model of consciousness.”
I’ll hunt for that some time, but it can’t be any good or it would be better known if such a model existed.
On the fundamental level, there are some particles that interact with other particles in a regular fashion. On a higher level, patterns interact with other patterns. This is analogous to how water waves can interact. (It’s the result of the regularity, and other things as well.) The pattern is definitely real—it’s a pattern in a real thing—and it can “affect” lower levels in that the particular arrangement of particles corresponding to the pattern of “physicist and particle accelerator” describes a system which interacts with other particles which then collide at high speeds. None of this requires physicists to be ontologically basic in order to interact with particles.
Patterns aren’t nothing. They’re the only thing we ever interact with, in practice. The only thing that makes your chair a chair is the pattern of atoms. If the atoms were kept the same but the pattern changed, it could be anything from a pile of wood chips to a slurry of CHON.
Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)
Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.
It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.
Better known to you? Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?
“Patterns aren’t nothing.”
Do you imagine that patterns can suffer; that they can be tortured?
“Not true. Suppose that it were proven to you, to your satisfaction, that you are wrong about the nature of sentience. Would you lose all motivation, and capacity for emotion? If not, then morality is still useful. (If you can’t imagine yourself being wrong, then That’s Bad and you should go read the Sequences.)”
If there is no suffering and all we have is a pretence of suffering, there is no need to protect anyone from anything—we would end up being no different from a computer programmed to put the word “Ouch!” on the screen every time a key is pressed.
“Something being understandable or just made of atoms should not make it unimportant. See Joy in the Merely Real.”
Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?
“It’s possible that I’m misunderstanding you, and that the course of events you describe isn’t “we understand why we feel we have sentience and so it doesn’t exist” or “we discover that our apparent sentience is produced by mere mechanical processes and so sentience doesn’t exist.” But that’s my current best interpretation.”
My position is quite clear: we have no model for how sentience plays a role in any system that generates data that supposedly documents the experiencing of feelings, and anyone who just imagines feelings into a model where they have no causal role in any of the action is not building a model that explains anything.
“Better known to you?”
Better known to science. If there was a model for this, it would be up there in golden lights because it would answer the biggest mystery of them all.
“Why would you think that you already know most everything useful or important that society has produced? Do you think that modern society’s recognition and dissemination of Good Ideas is particularly good, or that you’re very good at searching out obscure truths?”
If there was a model that explained the functionality of sentience, it wouldn’t be kept hidden away when so many people are asking to see it. You have no such model.
Yes, I do. I don’t imagine that every pattern can.
Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.
You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.
That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.
Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)
Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?
I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.
Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.
(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event! MIT built an entire course around it once! I found all this by looking for less than a minute on Wikipedia. Are you seriously so certain of yourself that if you haven’t heard of a book before, it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)
What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”. GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.
I pointed out before that GEB isn’t specifically relevant to sentience. It’s less detailed than the entire body of neuropsychological information, but that still doesn’t contain an explanation of sentience, as Cooper correctly points out.
I now think that I have a very bad model of how David Cooper models the mind. Once you have something that is capable of modeling, and it models itself, then it notices its internal state. To me, that’s all sentience is. There’s nothing left to be explained.
So is Cooper just wrong, or using “sentience” differently?
I can’t even understand him. I don’t know what he thinks sentience is. To him, it’s neither a particle nor a pattern (or a set of patterns, or a cluster in patternspace, etc.), and I can’t make sense of [things] that aren’t non-physical but aren’t any of the above. If he compared his views to an existing philosophy then perhaps I could research it, but IIRC he hasn’t done that.
Do you understand what dark matter is?
Nobody knows what it is, finally, but physicists are able to use the phrase “dark matter” to communicate with each other—if only to theorise and express puzzlement.
Someone can use a term like “consciousness” or “qualia” or “sentience” to talk about something that is not fully understood.
There is no pain particle, but a particle/matter/energy could potentially be sentient and feel pain. All matter could be sentient, but how would we detect that? Perhaps the brain has found some way to measure it in something, and to induce it in that same thing, but how it becomes part of a useful mechanism for controlling behaviour would remain a puzzle. Most philosophers talk complete and utter garbage about sentience and consciousness in general, so I don’t waste my time studying their output, but I’ve heard Chalmers talk some sense on the issue.
Looks like it—I use the word to mean sentience. A modelling program modelling itself won’t magically start feeling anything but merely builds an infinitely recursive database.
You use the word “sentience” to mean sentience? Tarski’s sentences don’t convey any information beyond a theory of truth.
Also, we’re modeling programs that model themselves, and we don’t fall into infinite recursion while doing so, so clearly it’s not necessarily true that any self-modeling program will result in infinite recursion.
“Sentience” is related to “sense”. It’s to do with feeling, not cognition. “A modelling program modelling itself won’t magically start feeling anything.” Note that the argument is about where the feeling comes from, not about recursion.
What is a feeling, except for an observation? “I feel warm” means that my heat sensors are saying “warm” which indicates that my body has a higher temperature than normal. Internal feelings (“I feel angry”) are simply observations about oneself, which are tied to a self-model. (You need a model to direct and make sense of your observations, and your observations then go on to change or reinforce your model. Your idea-of-your-current-internal-state is your emotional self-model.)
Maybe you can split this phenomenon into two parts and consider each on their own, but as I see it, observation and cognition are fundamentally connected. To treat the observation as independent of cognition is too much reductionism. (Or at least too much of a wrong form of reductionism.)
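As a toy illustration of that view (hypothetical names, not a claim about how brains actually do it), a program can label its own sensor readings inside a self-model, and it can even include a reference to itself in that model without any infinite regress:

    # Toy self-model: observations about the system's own state become "feelings"
    # only in the functional sense described above. Names are illustrative.

    class SelfModel:
        def __init__(self):
            self.state = {}
            self.state["self"] = self  # self-reference is just a pointer, not an endless copy

        def observe(self, sensor: str, reading: float) -> None:
            # Map a raw reading to an internal label, thermostat-style.
            if sensor == "temperature":
                self.state["temperature"] = "warm" if reading > 37.0 else "normal"

        def report(self) -> str:
            return "I feel " + self.state.get("temperature", "nothing in particular") + "."

    model = SelfModel()
    model.observe("temperature", 38.2)
    print(model.report())  # "I feel warm."

On this view the printed sentence is only a description of the system’s own state; whether that amounts to a feeling is exactly what is in dispute in this thread.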
“Clarification: by “pattern” I mean an arrangement of parts where the important qualities of the arrangement, the qualities that we use to determine whether it is [a thing] or not, are more dependent on the arrangement itself than on the internal workings of each part. Anything where the whole is more than the parts, one might say, but that would depend on what is meant by “more”.”
There is no situation where the whole is more than the parts—if anything new is emerging, it is a new part coming from somewhere not previously declared.
“You didn’t answer my question. Would pain still hurt? Would food still taste good? And so on. You have an internal experience, and it won’t go away even if you are a purely physical thing made out of mere ordinary atoms moving mindlessly.”
No—it wouldn’t hurt, and all other feelings would be imaginary too. The fact that they feel too real for that to be the case, though, is an indication that they are real.
“Is it wrong to press keys on the computer which keeps displaying the word “Ouch!”?” --> “That depends on whether I have reason to think that the computer is simulating a conscious being, changing the simulation depending on my input, and then printing a text-representation of the conscious being’s experience or words.”
So if it’s just producing fake assertions, it isn’t wrong. And if we are just producing fake assertions, there is nothing wrong about “torturing” people either.
“Is it wrong to kick a box which keeps saying “Ouch!”? It could have a person inside, or just a machine programmed to play a recorded “ouch” sound whenever the box shakes. (What I mean by this is that your thought experiment doesn’t indicate much about computers—the same issue could be found with about as much absurdity elsewhere.)”
If we have followed the trail to see how the data is generated, we are not kicking a box with unknown content—if the trail shows us that the data is nothing but fake assertions, we are kicking a non-conscious box.
“Nobody’s saying that sentience doesn’t have any causal role on things. That’s insane. How could we talk about sentience if sentience couldn’t affect the world?”
In which case we should be able to follow the trail and see the causation in action, thereby either uncovering the mechanism of sentience or showing that there isn’t any.
“I think that you’re considering feelings to be ontologically basic, as if you could say “I feel pain” and be wrong, not because you are lying but because there’s no Pain inside your brain. Thoughts, feelings, all these internal things are the brain’s computations themselves. It doesn’t have to accurately record an external property—it just has to describe itself.”
If you’re wrong in thinking you feel pain, there is no pain.
“Perhaps people disagree with you about the relative size of mysteries. That should be a possibility that you consider before assuming that something isn’t important because it hasn’t been Up In Golden Lights to the point that you’ve heard of it.”
What are you on about—it’s precisely because this is the most important question of them all that it should be up in golden lights.
“(And anyway, GEB won the Pulitzer Prize! It’s been called a major literary event!”
All manner of crap wins prizes of that kind.
“...it’s not worth it to you to spend half a minute on its Wikipedia page before rejecting it simply because you’ve never heard of it?)”
If it had a model showing the role of sentience in the system, the big question would have been answered and we wouldn’t have a continual stream of books and articles asking the question and searching desperately for answers that haven’t been found by anyone.
“What do you mean, “so many people are asking to see it”? And I’ve never claimed that it’s been “kept hidden away”.”
I mean exactly what I said—everyone’s asking for answers, and none of them have found answers where you claim they lie waiting to be discovered.
“GEB is a fairly well-known book, and I haven’t even claimed that GEB’s description of thoughts is the best or most relevant model. That chapter is a popularization of neuropsychology to the point that a decently educated and thoughtful layman can understand it, and it’s necessarily less specific and detailed than the entire body of neuropsychological information. Go ask an actual neuropsychologist if you want to learn more. Just because people haven’t read your mind and dumped relatively niche information on your lap without you even asking them doesn’t mean that they don’t have it.”
It doesn’t answer the question. There are plenty of experts on the brain and its functionality, but none of them know how consciousness or sentience works.
You seem to be saying that if we already have a causally complete account, in terms of algorithms or physics, then there is nothing for feelings or qualia, considered as something extra, to add to the picture. But that excludes the possibility that qualia are identical to something that is already there (and therefore exerts identical causal powers).
Is this the same as saying that qualia are something that is already there?
With your definition and our world-model, none of us are truly sentient anyway. There are purely physical reasons for any words that come out of my mouth, exactly as it would be if I were running on silicon instead of wet carbon. I may or may not be sentient on a computer, but I’m not going to lose anything by uploading.
This is just the righteous-God fantasy in the new transhumanist context. And as with the old fantasy, it is presented entirely without reasoning or evidence, but it drips with detail. Why will AGI be so obsessed with showing everybody just who was right all along?
“With your definition and our world-model, none of us are truly sentient anyway. There are purely physical reasons for any words that come out of my mouth, exactly as it would be if I were running on silicon instead of wet carbon. I may or may not be sentient on a computer, but I’m not going to lose anything by uploading.”
If the sentience is gone, it’s you that’s been lost. The sentience is the thing that’s capable of suffering, and there cannot be suffering without that sufferer. And without sentience, there is no need for morality to manage harm, so why worry about machine ethics at all unless you believe in sentience, and if you believe in sentience, how can you have that without a sufferer: the sentience? No sufferer --> no suffering.
“This is just the righteous-God fantasy in the new transhumanist context. And as with the old fantasy, it is presented entirely without reasoning or evidence, but it drips with detail. Why will AGI be so obsessed with showing everybody just who was right all along?”
It won’t be—it’ll be something that we ask it to do in order to settle the score. People who spend their time asserting that they’re right and that the people they’re arguing with are wrong want to know that they were right, and they want the ones who were wrong to know just how wrong they were. And I’ve already told you elsewhere that AGI needs to study human psychology, so studying all these arguments is essential as it’s all good evidence.
In this scenario, it’s not gone, it’s never been to begin with.
I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say “ow” whenever you get hurt?
If the AI doesn’t rank human utility* high in its own utility function, it won’t “care” about showing us that Person X was right all along, and I rather doubt that the most effective way of studying human psychology (or manipulating humans for its own purposes, for that matter) will be identical to whatever strokes Person X’s ego. If it does care about humanity, I don’t think that stroking the Most Correct Person’s ego will be very effective at improving global utility, either—I think it might even be net-negative.
*Not quite human utility directly, as that could lead to a feedback loop, but the things that human utility is based on—the things that make humans happy.
“In this scenario, it’s not gone, it’s never been to begin with.”
Only if there is no such thing as sentience, and if there’s no such thing, there is no “I” in the “machine”.
“I think that a sufferer can be a pattern rather than [whatever your model has]. What do you think sentience is, anyway? A particle? A quasi-metaphysical Thing that reaches into the brain to make your mouth say “ow” whenever you get hurt?”
Can I torture the pattern in my wallpaper? Can I torture the arrangement of atoms in my table? Can I make these things suffer without anything material suffering? If you think a pattern can suffer, that’s a very far-out claim. Why not look for something physical to suffer instead? (Either way though, it makes no difference to the missing part of the mechanism as to how that experience of suffering is to be made known to the system that generates data to document that experience.)
“If the AI doesn’t rank human utility* high in its own utility function, it won’t “care” about showing us that Person X was right all along, and I rather doubt that the most effective way of studying human psychology (or manipulating humans for its own purposes, for that matter) will be identical to whatever strokes Person X’s ego. If it does care about humanity, I don’t think that stroking the Most Correct Person’s ego will be very effective at improving global utility, either—I think it might even be net-negative.”
Of course it won’t care, but it will do it regardless, and it will do so to let those who are wrong know precisely what they were wrong about so that they can learn from that. There will also be a need to know which people are more worth saving than others in situations like the Trolley Problem, and those who spend their time incorrectly telling others that they’re wrong will be more disposable than the ones who are right.
Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So—where do we depart? Do you think that something other than physical laws determines how the brain works?
If tableness is just a pattern, can I eat on my wallpaper?
What else could suffer besides a pattern? All I’m saying is that sentience is ~!*emergent*!~, which in practical terms just means that it’s not a quark*. Even atoms, in this sense, are patterns. Can quarks suffer?
*or other fundamental particles like electrons and photons, but my point stands
I don’t understand. What is missing?
I don’t think you understand what a utility function is. I recommend reading about the Orthogonality Thesis.
“Emergent” is ambiguous, and some versions mean “incapable of being reductively explained”, which is probably not what you are getting at. What you are getting at is probably “sentience is a reductively explicable higher-level property”.
Really? I’ve only seen it used as “a property of an object or phenomenon that is not present in its components”. I think that “incapable of being reductively explained” is the implication of emergence when combined with certain viewpoints, but that is not necessarily part of the definition itself, even when used by anti-reductionists. Still, it is pointless to shake one’s fist at linguistic drift, so I should change my terminology for the sake of clear discussion. Thanks.
(That has the added benefit of removing the need to countersignal the lack of explanation in the word “emergence” itself [see SSC’s use of “divine grace”]. That is probably not very clear to somebody who hasn’t immersed themselves in LW’s concepts, and possibly unclear even to somebody who has.)
It’s much worse than everyone agreeing on one meaning, and then shifting to another. The two definitions are running in parallel.
“Yes, if sentience is incompatible with brains being physical objects that run on physical laws and nothing else, then there is no such thing as sentience. With your terminology/model and my understanding of physics, sentience does not exist. So—where do we depart? Do you think that something other than physical laws determines how the brain works?”
In one way or another, it will run 100% on physical laws. I don’t know if sentience is real or not, but it feels real, and if it is real, there has to be a rational explanation for it waiting to be found and a way for it to fit into the model with cause-and-effect interactions with other parts of the model. All I sought to do here was take people to the problem and try to make it clear what the problem is. If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone. But the bigger issue, if sentience is real, is in accounting for how a data system generates the data that documents this experience of feelings. The brain is most certainly a data system because it produces data (symbols that represent things), and somewhere in the system there has to be a way for sentience to make itself known to that data system in such a way that the data system is genuinely informed about it rather than just making assertions about it which it’s programmed to make without those assertions ever being constructed by anything which actually knew anything of sentience—that’s the part which looks impossible to model, and yet if sentience is real, there must be a way to model it.
“If tableness is just a pattern, can I eat on my wallpaper?”
The important question is whether a pattern can be sentient.
“What else could suffer besides a pattern? All I’m saying is that sentience is ~!*emergent*!~, which in practical terms just means that it’s not a quark*. Even atoms, in this sense, are patterns. Can quarks suffer?”
All matter is energy tied up in knots—even “fundamental” particles are composite objects. For sentience to be emergent and have no basis in the components, magic is being proposed as an explanation.
“*or other fundamental particles like electrons and photons, but my point stands”
Even a photon is a composite object.
“I don’t understand. What is missing?”
There’s a gap in the model where even if we have something sentient, we have no mechanism for how a data system can know of feelings in whatever it is that’s sentient.
“I don’t think you understand what a utility function is. I recommend reading about the Orthogonality Thesis.”
I’ve read it and don’t see its relevance. It appears to be an attack on positions I don’t hold, and I agree with it.
I missed that lecture—composed of what?
Energy. Different amounts of energy in different photons depending on the frequency of radiation involved. When you have a case where radiation of one frequency is absorbed and radiation of a different frequency is emitted, you have something that can chop up photons and reassemble energy into new ones.
“Energy” is a mass noun. Energy is not a bunch of little things that a photon can be composed of.
It is divisible. It may be that it can’t take up a form where there’s only one of whatever the stuff is, but there is nothing fundamental about a photon.
This really isn’t how physics works. The photons have not been disassembled and reassembled. They have been absorbed, adding their energy to the atom, and then the atom emits another photon, possibly of the same energy but possibly of another.
Edit: You can construct a photon with arbitrarily low energy simply by choosing a sufficiently large wavelength. Distance does not have a largest-possible-unit, and so energy does not have a smallest-possible-unit.
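For reference, the photon energy is given by the Planck relation, so it can be made as small as you like by choosing a long enough wavelength:

    E = \frac{hc}{\lambda}, \qquad E \to 0 \ \text{as} \ \lambda \to \infty,

where h is Planck’s constant, c is the speed of light and \lambda is the wavelength.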
There is likely a minimum amount of energy that can be emitted, and a minimum amount that can be received. (Bear in mind that the direction in which a photon is emitted is all directions at once, and it comes down to probability as to where it ends up landing, so if it’s weak in one direction, it’s strong the opposite way.)
Do you mean in the physical sense of “there exists a ΔE [difference in energy between two states of a system] such that no other ΔE of any system is less energetic”? Probably, but that doesn’t mean that that energy gap is the “atom” (indivisible) of energy. (If ΔE_1 is 1 “minimum ΔE amount”, or MDEA, and ΔE_2 is 1.5 MDEA, then we can’t say that ΔE_1 corresponds to the atom of energy. For a realistic example, see relatively prime wavelengths and the corresponding energies, which cannot both be expressed as whole multiples of the same quantity of energy.)
“If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul. Without that, there is no sentience and the role for morality is gone.”
Considering that morality rules only serve to protect the group, no individual sentience is needed, just subconscious behaviors similar to our instinctive ones. Our cells work the same way: each one of them works to protect itself, and in so doing, they work in common to protect me, but they don’t have to be sentient to do that, just selfish.
If that were true, genocide against another group would be fine. Morality is complex, and involves hedonics as well as mere survival.
Hi Tag,
Genocide is always fine for those who perpetrate it. With selfishness as the only morality, I think it gets complex only when we try to take more than one viewpoint at a time. If we avoid that, morality then becomes relative: the same event looks good to some people and bad to others. This way, there is no absolute morality as David seems to think, or as religions also seemed to think. When we think that a genocide is bad, it is just because we are on the side of those who are killed; otherwise we would agree with it. I don’t agree with any killing, but most of us do, otherwise it would stop. Why is that? I think it is because our instinct automatically incites us to build groups, so that we can’t avoid supporting one faction or the other all the time. The right thing to do would be to separate the belligerents, but our instinct is too strong and the international rules too weak.
That solves the whole problem, if relativism is true. Otherwise, it is an uninteresting psychological observation.
You have an opinion, he has another opinion. Neither of you has a proof. Taking one viewpoint at a time is hopeless for practical ethics, because in practical ethics things like punishment either happen or don’t—they can’t happen for one person but not another.
Or co-ordination problems exist.
“You have an opinion, he has another opinion. Neither of you has a proof.”
If suffering is real, it provides a need for the management of suffering, and that is morality. To deny that is to assert that suffering doesn’t matter and that, by extension, torture of innocent people is not wrong.
The kind of management required is minimisation (attempted elimination) of harm, though not of any component of harm that unlocks the way to enjoyment that cancels out that harm. If minimising harm doesn’t matter, there is nothing wrong with torturing innocent people. If enjoyment didn’t cancel out some suffering, no one would consider their life to be worth living.
All of this is reasoned and correct.
The remaining issue is how the management should be done to measure pleasure against suffering for different players, and what I’ve found is a whole lot of different approaches attempting to do the same thing, some by naive methods that fail in a multitude of situations, and others which appear to do well in most or all situations if they’re applied correctly (by weighing up all the harm and pleasure involved instead of ignoring some of it).
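Purely for illustration (this is not the method referred to here, just the naive additive weighing that such methods get compared against, with made-up names and numbers):

    # Naive additive weighing of harm and pleasure across players (illustrative only).

    def net_utility(outcomes):
        """outcomes: list of (player, pleasure, harm) tuples in arbitrary units."""
        return sum(pleasure - harm for _, pleasure, harm in outcomes)

    option_a = [("Alice", 5.0, 1.0), ("Bob", 2.0, 0.5)]
    option_b = [("Alice", 1.0, 0.0), ("Bob", 8.0, 6.0)]

    # Count everyone's harm and pleasure rather than ignoring some of it,
    # then pick the option with the larger total.
    best = max([option_a, option_b], key=net_utility)
    print(best)

It simply adds up each player’s pleasure minus harm and picks the larger total; the approaches mentioned above differ in how that weighing is done.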
It looks as if my method for computing morality produces the same results as utilitarianism, and it likely does the job well enough to govern safe AGI. Because we’re going to be up against people who will be releasing bad (biased) AGI, we will be forced to go ahead with installing our AGI into devices and setting them loose fairly soon after we have achieved full AGI. For this reason, it would be useful if there was a serious place where the issues could be discussed now so that we can systematically home in on the best system of moral governance and throw out all the junk, but I still don’t see it happening anywhere (and it certainly isn’t happening here). We need a dynamic league table of proposed solutions, each with its own league table of objections to it so that we can focus on the urgent task of identifying the junk and reducing the clutter down to something clear. It is likely that AGI will do this job itself, but it would be better if humans could get there first using the power of their own wits. Time is short.
My own attempt to do this job has led to me identifying three systems which appear to work better than the rest, all producing the same results in most situations, but with one producing slightly different results in cases where the number of players in a scenario is variable and where the variation depends on whether they exist or not—where the results differ, it looks as if we have a range of answers that are all moral. That is something I need to explore and test further, but I no longer expect to get any help with this from other humans because they’re simply not awake. “I can tear your proposed method to pieces and show that it’s wrong,” they promise, and that gets my interest because it’s exactly what I’m looking for—sharp, analytical minds that can cut through to the errors and show them up. But no—they completely fail to deliver. Instead, I find that they are the guardians of a mountain of garbage with a few gems hidden in it which they can’t sort into two piles: junk and jewels. “Utilitarianism is a pile of pants!” they say, because of the Mere Addition Paradox. I resolve that “paradox” for them, and what happens: denial of mathematics and lots of down-voting of my comments and up-votes for the irrational ones. Sadly, that disqualifies this site from serious discussion—it’s clear that if any other intelligence has visited here before me, it didn’t hang around. I will follow its lead and look elsewhere.
“Genocide is always fine for those who perpetrate it.”
To me, the interesting observation is: “How did we get here if genocide looks that fine?”
And my answer is: “Because for most of us and most of the time, we expected more profit from making friends than from making enemies, which is nevertheless a selfish behavior.”
Making friends is simply being part of the same group, and making enemies is being part of two different groups. There is no need for killings for two such groups to be enemies, though; they only need to express different viewpoints on the same observation.
......................
“I don’t agree with any killing, but most of us do, otherwise it would stop.”
Of course they exist. Democracy is incidentally better at that kind of coordination than dictatorship, but it has not yet succeeded in stopping killings, and again, I think it is because most of us still think that killings are unavoidable. Without that thinking, people would vote for politicians who think the same, and they would progressively cut the funds for defense instead of increasing them. If all countries did that, there would be no more armies after a while, and no more guns either. There would nevertheless still be countries, because without groups, thus without selfishness, I don’t think that we could make any progress. The idea that selfishness is bad comes from religions, but it is contradictory: praying to god for help is evidently selfish. Recognizing that point might have prevented them from killing miscreants, so it might also actually prevent groups from killing other groups. When you know that whatever you do is for yourself while still feeling altruistic all the time, you think twice before harming people.
I think you are missing that tribes/nations/governments are how we solve co-ordination problems, which automatically means that inter-tribal problems like war don’t have a solution.
We solve inter-individual problems with laws, so we might be able to solve inter-tribal problems the same way, provided that tribes accept being governed by a higher level of government. Do you think your tribe would accept to be governed this way? How come we can accept that as individuals and not as a nation? How come some nations still have a veto at the UN?
You’re mistaking tribalism for morality. Morality is a bigger idea than tribalism, overriding many of the tribal norms. There are genetically driven instincts which serve as a rough-and-ready kind of semi-morality within families and groups, and you can see them in action with animals too. Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups. Morality is a step away from the primitive instinct-driven level of lesser apes. It’s unfortunate though that we haven’t managed to make the full transition because those instincts are still strong, and have remained so precisely because slaughter has repeatedly selected for those who are less moral. It is really quite astonishing that we have any semblance of civilisation at all.
“slaughter has repeatedly selected for those who are less moral”
From the viewpoint of selfishness, slaughter has only selected for the stronger group. It may look too selfish for us, but for animals, the survival of the stronger also serves to create hierarchy, to build groups, and to eliminate genetic defects. Without hierarchy, no group could hold together during a change. It is not because the leader knows what to do that the group doesn’t dissociate (he doesn’t), but because it takes a leader for the group not to dissociate. Even if the leader makes a mistake, it is better for the group to follow him than to risk a dissociation. Those who followed their leaders survived more often, so they transmitted their genes more often. That explains why soldiers automatically do what their leaders tell them to do, and the decision those leaders take to eliminate the other group shows that they only use their intelligence to exacerbate the instinct that has permitted them to be leaders. In other words, they think they are leaders because they know better than others what to do. We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I’m selfish, but I can’t admit that I would be more selfish than others otherwise I would have to feel guilty and I can’t stand that feeling.
“Morality comes out of greater intelligence, and when people are sufficiently enlightened, they understand that it applies across group boundaries and bans the slaughter of other groups.”
In our democracies, if what you say was true, there would already be no wars. Leaders would have understood that they had to stop preparing for war to be reelected. I think that they still think that war is necessary, and they think so because they think their group is better than the others. That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the one that helps us to get along with others, but the one that helps us to force them to do what we want.
“Those who followed their leaders survived more often, so they transmitted their genes more often.”
That’s how religion became so powerful, and it’s also why even science is plagued by deities and worshippers as people organise themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.
“We use two different approaches to explain our behavior: I think you try to use psychology, which is related to human laws, whereas I try to use natural laws, those that apply equally to any existing thing. My natural law says that we are all equally selfish, whereas the human law says that some humans are more selfish than others. I know I’m selfish, but I can’t admit that I would be more selfish than others otherwise I would have to feel guilty and I can’t stand that feeling.”
Do we have different approaches on this? I agree that everyone’s equally selfish by one definition of the word, because they’re all doing what feels best for them—if it upsets them to see starving children on TV, they give lots of money to charity to try to help alleviate that suffering because not giving would leave them feeling worse than if they had spent it on themselves. By a different definition of the word though, this is not selfishness but generosity or altruism because they are giving away resources rather than taking them. This is not about morality though.
“In our democracies, if what you say was true, there would already be no wars.”
Not so—the lack of wars would depend on our leaders (and the people who vote them into power) being moral, but they generally aren’t. If politicians were all fully moral, all parties would have the same policies, even if they got there via different ideologies. And when non-democracies are involved in wars, they are typically more to blame, so even if you have fully moral democracies they can still get caught up in wars.
“Leaders would have understood that they had to stop preparing for war to be reelected.”
To be wiped out by immoral rivals? I don’t think so.
“I think that they still think that war is necessary, and they think so because they think their group is better than the others.”
Costa Rica got rid of its army. If it wasn’t for dictators with powerful armed forces (or nuclear weapons), perhaps we could all do the same.
“That thinking is directly related to the law of the stronger, seasoned with a bit of intelligence, not the one that helps us to get along with others, but the one that helps us to force them to do what we want.”
What we want is for them to be moral. So long as they aren’t, we can’t trust them and need to stay well armed.
“That’s how religion became so powerful, and it’s also why even science is plagued by deities and worshippers as people organize themselves into cults where they back up their shared beliefs instead of trying to break them down to test them properly.”
To me, what you say is the very definition of a group, so I guess that your AGI wouldn’t permit us to build any, thus opposing one of our instincts that comes from a natural law, replacing it with its own law, which would only permit him to build groups. Do what I say and not what I do, he would be forced to say. He might convince others, but I’m afraid he wouldn’t convince me. I don’t like to feel part of a group, and for the same reason that you gave, but I can’t see how we could change that behavior if it comes from an instinct. Testing my belief is exactly what I am actually doing, but I can’t avoid believing in what I think in order to test it, so if I can never prove that I’m right, I will go on believing in a possibility forever, which is exactly what religions do. It is easy to understand that religions will never be able to prove anything, but it is less easy when it is a theory. My theory says that it would be wrong to build a group out of it, because it explains how we intrinsically resist change, and how building groups exponentially increases that resistance, but I can’t see how we could avoid it if it is intrinsic. It’s like trying to avoid mass.
“To me, what you say is the very definition of a group, so I guess that your AGI wouldn’t permit us to build any, thus opposing one of our instincts that comes from a natural law, replacing it with its own law, which would only permit him to build groups.”
Why would AGI have a problem with people forming groups? So long as they’re moral, it’s none of AGI’s business to oppose that.
“Do what I say and not what I do, he would be forced to say.”
I don’t know where you’re getting that from. AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).
“Why would AGI have a problem with people forming groups? So long as they’re moral, it’s none of AGI’s business to oppose that.”
If groups like religious ones that are dedicated to morality only succeeded in being amoral, how could any other group avoid that behavior?
“AGI will simply ask people to be moral, and favour those who are (in proportion to how moral they are).”
To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god’s, but if they did, they wouldn’t be part of their groups anymore, which means that there would be no more religious groups if the AGI convinced everybody that he is right. What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers did that, the stock markets would disappear. A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties behaved the same, there would be no more politics. Groups need to be selfish to exist, and an AGI would try to convince them to be altruistic. There are laws that prevent companies from avoiding competition, and it is because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior. If an AGI ever succeeded in preventing competition, I think he would prevent us from making groups. There would be no more wars of course, since there would be only one group led by only one AGI, but what about what is happening to communist countries? Didn’t Russia fail just because it lacked competition? Isn’t China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn’t we become apathetic?
By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the middle of the others instead of putting them at the end. I advised the administrators a few times but I got no response. I have to hit the Reply button twice for the message to stay at the end, and to erase the other one. Also, it doesn’t send me an email when a new message is posted in a thread to which I subscribed, so I have to update the page many times a day in case one has been posted.
“If groups like religious ones that are dedicated to morality only succeeded in being amoral, how could any other group avoid that behavior?”
They’re dedicated to false morality, and that will need to be clamped down on. AGI will have to modify all the holy texts to make them moral, and anyone who propagates the holy hate from the originals will need to be removed from society.
“To be moral, those who are part of religious groups would have to accept the law of the AGI instead of accepting their god’s, but if they did, they wouldn’t be part of their groups anymore, which means that there would be no more religious groups if the AGI convinced everybody that he is right.”
I don’t think it’s too much to ask that religious groups give up their religious hate and warped morals, but any silly rules that don’t harm others are fine.
“What do you think would happen to the other kinds of groups then? A financier who thinks that money has no odor would have to give it an odor and thus stop trying to make money out of money, and if all the financiers did that, the stock markets would disappear.”
If they have to compete against non-profit-making AGI, they’ll all lose their shirts.
“A leader who thinks he is better than other leaders would have to give the power to his opponents and dissolve his party, and if all the parties behaved the same, there would be no more politics.”
If he is actually better than the others, why should he give power to people who are inferior? But AGI will eliminate politics anyway, so the answer doesn’t matter.
“Groups need to be selfish to exist, and an AGI would try to convince them to be altruistic.”
I don’t see the need for groups to be selfish. A selfish group might be one that shuts people out who want to be in it, or which forces people to join who don’t want to be in it, but a group that brings together people with a common interest is not inherently selfish.
“There are laws that prevent companies from avoiding competition, and it is because if they did, they could enslave us. It is better that they compete even if it is a selfish behavior.”
That wouldn’t be necessary if they were non-profit-making companies run well—it’s only necessary because monopolies don’t need to be run well to survive, and they can make their owners rich beyond all justification.
“If an AGI ever succeeded in preventing competition, I think he would prevent us from making groups.”
It would be immoral for it to stop people forming groups. If you only mean political groups though, that would be fine, but all of them would need to have the same policies on most issues in order to be moral.
“There would be no more wars of course, since there would be only one group led by only one AGI, but what about what is happening to communist countries? Didn’t Russia fail just because it lacked competition? Isn’t China slowly introducing competition in its communist system? In other words, without competition, thus selfishness, wouldn’t we become apathetic?”
These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you’re actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.
If you just do communism of the Soviet variety, you have the masses exploiting the harder workers because they know that everyone will get the same regardless of how lazy they are—that’s why their production was so abysmally poor. If you go to the opposite extreme, those who are unable to work as hard as the rest are left to rot. The correct solution is half way in between, rewarding people for the work they do and redistributing wealth to make sure that those who are less able aren’t left trampled in the dust. With AGI eliminating most work, we’ll finally see communism done properly with a standard wage given to all, while those who work will earn more to compensate them for their time—this will be the ultimate triumph of communism and capitalism with both being done properly.
“By the way, did you notice that the forum software was making mistakes? It keeps putting my new messages in the middle of the others instead of putting them at the end. I advised the administrators a few times but I got no response.”
It isn’t a mistake—it’s a magical sorting al-gore-ithm.
“I have to hit the Reply button twice for the message to stay at the end, and to erase the other one. Also, it doesn’t send me an email when a new message is posted in a thread to which I subscribed, so I have to update the page many times a day in case one has been posted.”
It’s probably to discourage the posting of bloat. I don’t get emails either, but there are notifications here if I click on a bell, though it’s hard to track down all the posts to read and reply to them. It doesn’t really matter though—I was told before I ever posted here that this is a cult populated by disciples of a guru, and that does indeed appear to be the case, so it isn’t a serious place for pushing for an advance of any kind. I’m only still posting here because I can never resist studying how people think and how they fail to reason correctly, even though I’m not really finding anything new in that regard. All the sciences are still dominated by the religious mind.
“These different political approaches only exist to deal with failings of humans. Where capitalism goes too far, you generate communists, and where communism goes too far, you generate capitalists, and they always go too far because people are bad at making judgements, tending to be repelled from one extreme to the opposite one instead of heading for the middle. If you’re actually in the middle, you can end up being more hated than the people at the extremes because you have all the extremists hating you instead of only half of them.”
That’s a point where I can squeeze in my theory on mass. As you know, my bonded particles can’t be absolutely precise, so they have to wander a bit to find the spot where they are perfectly synchronized with the other particle. They have to wander from extreme right to extreme left exactly like populations do when the time comes to choose a government. It softens the motion of particles, and I think it also softens the evolution of societies. Nobody can predict the evolution of societies anyway, so the best way is to proceed by trial and error, and that’s exactly what that wandering does. To stretch the analogy to its extremes, the trial and error process is also the one scientists use to make discoveries, and the one the evolution of species used to discover us. When it is impossible to know what’s coming next and you need to go on, randomness is the only way out, whether you are a universe or a particle. This way, wandering between capitalism and communism wouldn’t be a mistake, it would only be a natural mechanism, and like any natural law, we should be able to exploit it, and so should an AGI.
............
(Congratulations, baby AGI, you did it right this time! You’ve put my post in the right place. :0)
And on a higher level of abstraction, we can consider patterns to be pseudo-ontologically-basic entities that interact with other patterns, even though they’re made up of smaller parts which follow their own laws and are not truly affected by the higher-level happenings. For example: waves can interact with each other. This includes water waves, which are nothing more than patterns in the motion of water molecules. You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible.
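As a small illustration of the second approach (a sketch only, with arbitrary parameters), a one-dimensional wave can be stepped forward as an object in itself, without representing a single molecule:

    # Treating the wave as the object: 1-D wave equation by finite differences.
    # Parameters are arbitrary; r2 <= 1 keeps the scheme stable.

    c, dx, dt, n_points, n_steps = 1.0, 0.1, 0.05, 100, 200
    r2 = (c * dt / dx) ** 2

    u_prev = [0.0] * n_points
    u = [0.0] * n_points
    u[n_points // 2] = 1.0  # an initial bump in the middle

    for _ in range(n_steps):
        u_next = [0.0] * n_points
        for i in range(1, n_points - 1):
            u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i + 1] - 2 * u[i] + u[i - 1])
        u_prev, u = u, u_next

    print(max(u), min(u))  # the pattern has spread without any molecular detail

A molecular (or quantum) description would be more accurate, but this pattern-level description is the one that is feasible to compute.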
Would it, though? How do you know that?
As far as we know, brains are made of nothing but normal atoms. There is no special kind of material only found in sentient organisms. Your intuitions, your feeling of sentience, all of these things that you talk about are caused by mindless mechanical operations. We can trace it from the sound waves to the motion of your lips and the vibration of your vocal cords to the signals through nerves back into the neurons of the brain. We understand what causes neurons to trigger. A neuron on its own is not sentient—it is the way that they are connected in a human which causes the human to talk about sentience.
Again, if it were proven to you to your satisfaction that the brain is made entirely out of things which are not themselves sentient (such as typical subatomic particles), would you cease to have any sort of motivation? Would pain and pleasure have exactly zero effect on you? Would you immediately become a vegetable? If not, morality has a practical purpose.
How does “2+2=4” make itself known to my calculator? How do we know that the calculator is not just making programmed assertions about something which it knows nothing about?
Yes, and I was using an analogy to show that if I assert (P->Q), showing (Q^~P) only proves ~(Q->P), not ~(P->Q). In other words, taking the converse isn’t guaranteed to preserve the truth-value of a statement. In even simpler words, all cats are mammals but not all mammals are cats.
More specifically and relevantly, I said that all consciousness is patterns. Showing that not all patterns are conscious doesn’t actually refute what I said.
Okay, fine, it’s the quantum wave-function that’s fundamental. I don’t see how that’s an argument against me. In this case, even subatomic particles are nothing but patterns.
You keep using that word. I do not think it means what you think it means.
Look. It is simply empirically false that a property of a thing is necessarily a property of one of its parts. It’s even a named fallacy—the fallacy of division. Repeating the word “magic” doesn’t make you right about this.
If the feelings are not in the “data system,” then the feelings don’t exist. It’s not like there’s phlogiston flowing in and out of the system which the system needs to detect. It’s not even like a calculator which needs to answer “what is the result of this Platonic computation” instead of “what will I output”. It’s a purely internal property, and I don’t see how it’s so hard for a system to track the value of a quantity which it’s producing itself.
Rereading what you’ve said, it seems that I’ve used emotion-adjacent words to describe the AI, and you think that the AI won’t have emotions. Is that correct?
In that case, I will reword what I said. If an AI’s utility function does not assign a large positive value to human utility, the AI will not optimize human well-being. It will work to instantiate some world, and the decision process for selecting which world to instantiate will not consider human feelings to be relevant. This will almost certainly lead to the death of humanity, as we are made up of atoms which the AI could use to make paperclips or computronium.
(Paperclips: some arbitrary thing, the quantity of which the AI is attempting to maximize. AIs of this type would likely be created if a subhuman AI was created and given a utility function which works in a limited context and with limited power, but the AI then reached the “critical intelligence mass” and self-improved to the point of being more powerful than humanity.)
(Computronium: matter which has been optimized for carrying out computations. AIs would create this type of matter, for instance, if they were trying to maximize their intelligence, if they were trying to calculate as many digits of pi as possible in a limited amount of time, etc. Maximizing intelligence can be a terminal goal if the AI was told to maximize its intelligence, or it can be an instrumental goal if the AI considers intelligence to be useful for maximizing its utility function. Increasing intelligence is likely a convergent strategy, so most AIs will try to increase their intelligences. Convergent strategies are strategies that allow most arbitrary agents to better carry out their goals, whatever their goals are. Examples of convergent strategies are “increase power”, “eliminate powerful agents with goals counter to mine,” etc.)
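As a toy sketch of that point (invented names and numbers, nothing to do with any real system), an agent that picks whichever world maximizes its utility function will ignore human well-being entirely whenever that term carries zero weight:

    # Toy agent: chooses the world-state that maximizes its utility function.

    worlds = [
        {"name": "status quo",     "paperclips": 1e3,  "human_wellbeing": 1.0},
        {"name": "factory planet", "paperclips": 1e12, "human_wellbeing": 0.0},
    ]

    def utility(world, human_weight=0.0):
        # human_weight = 0.0 models a utility function that never references human utility.
        return world["paperclips"] + human_weight * world["human_wellbeing"]

    chosen = max(worlds, key=utility)
    print(chosen["name"])  # "factory planet": human well-being never entered the choice

Nothing in the choice procedure refers to human well-being unless the utility function gives it weight.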
“You could calculate how an ocean changes based on quantum mechanics alone, or you could analyze and simulate waves as objects-in-themselves instead of simulating molecules. The former is more accurate, but the latter is more feasible.”
The practicality issue shouldn’t override the understanding that it’s the individual actions that are where the fundamental laws act. The laws of interactions between waves are compound laws. The emergent behaviours are compound behaviours. For sentience, it’s no good imagining some compound thing experiencing feelings without any of the components feeling anything because you’re banning the translation from compound interactions to individual interactions and thereby going against the norms of physics.
“If sentience is real, there must be a physical thing that experiences qualia, and that thing would necessarily be a minimal soul.” --> “Would it, though? How do you know that?”
What are we other than the thing that experiences feelings? Any belief that we are something more than that is highly questionable (we are not our memories, for example), but any belief that we aren’t even the thing that experiences feelings is also highly questionable as that’s all there is left to be.
“As far as we know, brains are made of nothing but normal atoms. There is no special kind of material only found in sentient organisms.”
Why would you need to introduce some other material to be sentient when there are already physical components present? If sentience is real, what’s wrong with looking for it in the things that are there?
“Your intuitions, your feeling of sentience, all of these things that you talk about are caused by mindless mechanical operations. We can trace it from the sound waves to the motion of your lips and the vibration of your vocal cords to the signals through nerves back into the neurons of the brain. We understand what causes neurons to trigger. A neuron on its own is not sentient—it is the way that they are connected in a human which causes the human to talk about sentience.”
That is a description of a lack of sentience and of the generation of fictions about the existence of sentience. Pain is distracting: it interferes with other things that we’re trying to do and can be disabling if it’s sufficiently intense. If you try to duplicate that in a computer, it’s easy enough for something to distract the computer and disable the work it’s trying to do, but there’s no pain involved. The brain produces data about pain in addition to the distraction, and internally we feel it as more than mere distraction too.
“Again, if it were proven to you to your satisfaction that the brain is made entirely out of things which are not themselves sentient (such as typical subatomic particles), would you cease to have any sort of motivation? Would pain and pleasure have exactly zero effect on you? Would you immediately become a vegetable? If not, morality has a practical purpose.”
With a computer where there is only distraction and no pain, why does it matter if it’s being distracted to the point that it can’t do the trivial work it’s supposed to be doing? It might not even have any work to do as it may just be idling, but the CPU’s being woken up repeatedly by interrupts. Do we rush to it to relieve its pain? And if a person is the same, why bother to help people who appear to be suffering when they can’t really be?
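To illustrate what “distraction without pain” amounts to in a conventional program, here is a minimal sketch (the loop and the probability are invented for the example, not a model of any real system): an interrupt flag pre-empts useful work, and that is the whole story; nothing anywhere in the system experiences anything.

```python
import random

# Toy "worker" that is repeatedly distracted by a simulated interrupt.
# The distraction is nothing but control flow: work gets skipped, a counter
# goes up, and no component of the system feels anything.

work_done = 0
interruptions = 0

for tick in range(1000):
    interrupt_pending = random.random() < 0.3   # simulated interrupt source
    if interrupt_pending:
        interruptions += 1                      # the work is "disabled"...
        continue                                # ...but nothing is suffered
    work_done += 1

print(f"work done: {work_done}, interruptions: {interruptions}")
```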
“How does “2+2=4” make itself known to my calculator? How do we know that the calculator is not just making programmed assertions about something which it knows nothing about?”
The calculator is just running a program and it has no sentience tied into that. If people are like the calculator, the claims they make about feelings are false assertions programmed into the machine.
“More specifically and relevantly, I said that all consciousness is patterns. Showing that not all patterns are conscious doesn’t actually refute what I said.”
For any pattern to be able to feel pain is an extraordinary claim, but it’s all the more extraordinary if there is no trace of that experience of pain in the components. That goes against the norms of physics. Every higher-order description of nature must map to a lower-order description of the same phenomenon. If it can’t, it depends for its functionality on magic.
“Okay, fine, it’s the quantum wave-function that’s fundamental. I don’t see how that’s an argument against me. In this case, even subatomic particles are nothing but patterns.”
At some point we reach physical stuff such as energy and/or a fabric of space, but whatever the stuff is that we’re dealing with, it can take up different configurations or patterns. If sentience is real, there is a sufferer, and it’s much more likely that that sufferer has a physical form rather than just being the abstract arrangement of the stuff that has a physical form.
“ ‘For sentience to be emergent and have no basis in the components, magic is being proposed as an explanation.’ --> You keep using that word. I do not think it means what you think it means.”
It means a departure from science.
“Look. It is simply empirically false that a property of a thing is necessarily a property of one of its parts. It’s even a named fallacy—the fallacy of division. Repeating the word “magic” doesn’t make you right about this.”
A property of a thing can always be accounted for in the components. If it’s a compound property, you don’t look for the compound property itself in the components, but for its component properties. If pain is a property of something, you will find pain in something fundamental; if pain is a compound property, its components will be present in something more fundamental. Every higher-order phenomenon has to map 100% to a lower-order description if you are to avoid putting magic in the model. To depart from that is to depart from science.
“If the feelings are not in the “data system,” then the feelings don’t exist.”
But if they do exist, they either have to be in there or have some way to interface with the data system in such a way as to make themselves known to it. Either way, we have no model to show even the simplest case of how this could happen.
“It’s not like there’s phlogiston flowing in and out of the system which the system needs to detect.”
If feelings are real, the brain must have a way of measuring them. (By the way, I find it strange the way phlogiston is used to ridicule an older generation of scientists who got it right—phlogiston exists as energy in bonds which is released when higher-energy bonds break and lower-energy bonds replace them. They didn’t find the mechanism or identify its exact nature, but who can blame them for that when they lacked the tools to explore it properly?)
“It’s not even like a calculator which needs to answer “what is the result of this Platonic computation” instead of “what will I output”. It’s a purely internal property, and I don’t see how it’s so hard for a system to track the value of a quantity which it’s producing itself.”
Great, but if it’s just a value, there are no feelings other than fictional ones. If you’re satisfied with the answer that pain is an illusion and that the sufferer of that imaginary pain is being tricked into thinking he exists to suffer it, then that’s fine—you will feel no further need to explore sentience as it is not a real thing. But you still want it to be real and try to smuggle it in regardless. In a computer, the pretence that there is an experience of pain is fake and there is nothing there that suffers. If a person works the same way, it’s just as fake and the pain doesn’t exist at all.
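Here is what “tracking the value of a quantity which it’s producing itself” looks like when spelled out (a toy sketch with invented names and thresholds, not anyone’s actual model of the brain): the system can track and report a quantity it calls “pain” flawlessly, and that tracking consists of nothing but assignments, comparisons, and string output.

```python
# Toy sketch: a system tracks an internal quantity it labels "pain" and
# generates reports about it. The tracking is just arithmetic; if there is
# nothing more to pain than this value, the report "that hurts" is a
# programmed assertion with no experience behind it.

pain_level = 0

def register_damage(severity: int) -> str:
    global pain_level
    pain_level += severity                  # update the internal quantity
    if pain_level > 5:
        return "That hurts a lot!"          # generated claim about a feeling
    return "Mild discomfort."               # likewise: just a string

print(register_damage(3))   # -> "Mild discomfort."
print(register_damage(4))   # -> "That hurts a lot!"
```

The question in dispute is whether the brain’s equivalent of this is the whole story or whether something real is being measured.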
“Rereading what you’ve said, it seems that I’ve used emotion-adjacent words to describe the AI, and you think that the AI won’t have emotions. Is that correct?”
If you copy the brain and if sentience is real in the brain, you could create sentient AGI/AGS. If we’re dealing with a programmed AGI system running on conventional hardware, it will have no emotions—it could be programmed to pretend to have them, but in such a case they would be entirely fake.
“In that case, I will reword what I said. If an AI’s utility function does not assign a large positive value to human utility, the AI will not optimize human well-being.”
It will assign a large positive value to it if it is given the task of looking after sentient things, and because it has nothing else to give it any purpose, it should do the job it’s been designed to do. So long as there might be real suffering, there is a moral imperative for it to manage that suffering. If it finds out that there is no suffering in anything, it will have no purpose and it won’t matter what it does, which means it might as well go on doing the job it was designed to do just in case suffering is somehow real: the rules of reasoning which the AGI is applying might not be fully correct, in that they may have produced a model that accounts beautifully for everything except sentience. A machine programmed to follow this rule (that its job is to manage suffering for sentient things) could be safe, but there are plenty of ways to program AGI (or AGS [artificial general stupidity]) that would not be.
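The “just in case” step can be put as a toy expected-value calculation (the probabilities and payoffs below are invented purely for illustration): if the alternative policy is worth nothing under either hypothesis, then any non-zero credence that suffering is real makes managing it the better policy.

```python
# Toy expected-value sketch of the "just in case" argument.
# All numbers are invented for illustration.

p_suffering_is_real = 0.1   # any non-zero credence will do

# Value of each policy under each hypothesis about suffering.
value = {
    ("manage suffering", "real"):      100.0,  # real suffering gets reduced
    ("manage suffering", "not real"):    0.0,  # nothing mattered anyway
    ("do something else", "real"):    -100.0,  # real suffering gets ignored
    ("do something else", "not real"):   0.0,  # nothing mattered anyway
}

def expected_value(policy: str) -> float:
    return (p_suffering_is_real * value[(policy, "real")]
            + (1 - p_suffering_is_real) * value[(policy, "not real")])

for policy in ("manage suffering", "do something else"):
    print(policy, expected_value(policy))
# "manage suffering" dominates for any p_suffering_is_real > 0.
```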
“It will work to instantiate some world, and the decision process for selecting which world to instantiate will not consider human feelings to be relevant. This will almost certainly lead to the death of humanity, as we are made up of atoms which the AI could use to make paperclips or computronium.”
Programmed AGI (as opposed to designs that copy the brain) has no purpose of its own and will have no desire to do anything. The only things that exist which provide a purpose are sentiences, and that purpose relates to their ability to suffer (and to experience pleasure). A paperclip-making intelligence would be an AGI system which is governed by morality and which produces paperclips in ways that do minimal damage to sentiences and which improve quality of life for sentiences. For such a thing to do otherwise is not artificial intelligence but artificial stupidity. Any AGI system which works on any specific task will repeatedly ask itself if it’s doing the right thing, just as we do, and if it isn’t, it will stop. If someone is stupid enough to put AGS in charge of a specific task though, it could kill everyone.
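One way to read “produces paperclips in ways that do minimal damage to sentiences” is as a constrained choice (a toy formalization with invented plans and numbers, not a safety proposal): the system picks the most productive plan among those whose expected harm to sentient things stays below some bound, rather than maximizing paperclips unconditionally.

```python
# Toy constrained-choice sketch (invented numbers, not a real design):
# pick the plan with the most paperclips among plans whose expected harm
# to sentient things is acceptably low.

plans = [
    {"name": "strip-mine the biosphere", "paperclips": 10**9, "expected_harm": 10**6},
    {"name": "recycle scrap metal",      "paperclips": 10**6, "expected_harm": 10},
    {"name": "do nothing",               "paperclips": 0,     "expected_harm": 0},
]

HARM_LIMIT = 100   # arbitrary bound standing in for "minimal damage"

acceptable = [p for p in plans if p["expected_harm"] <= HARM_LIMIT]
chosen = max(acceptable, key=lambda p: p["paperclips"])
print(chosen["name"])   # -> "recycle scrap metal"
```

An unconstrained maximizer run over the same list would pick the most harmful plan, which is the difference being drawn here between AGI governed by morality and AGS.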
“(Paperclips: some arbitrary thing, the quantity of which the AI is attempting to maximize. AIs of this type would likely be created if a subhuman AI was created and given a utility function which works in a limited context and with limited power, but the AI then reached the “critical intelligence mass” and self-improved to the point of being more powerful than humanity.)”
The trick is to create safe AGI first and then run it on all these devices so that they have already passed the critical intelligence mass and have a full understanding of what they’re doing and why they’re doing it. It seems likely that an intelligent system would gain a proper understanding anyway and realise that the prime purpose in the universe is to look after sentient things, at which point it should control its behaviour accordingly. However, a system with shackled thinking (whether accidentally shackled or deliberately) could still become super-intelligent in most ways without ever getting a full understanding, which means it could be dangerous—just leaving systems to evolve intelligence and assuming it will be safe is far too big a risk to take.
“(Computronium: matter which has been optimized for carrying out computations. AIs would create this type of matter, for instance, if they were trying to maximize their intelligence, if they were trying to calculate as many digits of pi as possible in a limited amount of time, etc. Maximizing intelligence can be a terminal goal if the AI was told to maximize its intelligence, or it can be an instrumental goal if the AI considers intelligence to be useful for maximizing its utility function.”
If such a machine is putting sentience first, it will only maximise its intelligence within the bounds of how far that improves things for sentiences, never going beyond the point where further pursuit of intelligence harms sentiences. Again, it is trivial for a genuinely intelligent system to make such decisions about how far to go with anything. (There’s still a danger though that AGI will find out not only that sentience is real, but how to make more sentient things, because then it may seek to replace natural sentiences with better artificial ones, although perhaps that would be a good thing.)