There is a major problem with sentience though, and I want to explore that here, because there are many people who believe that intelligent machines will magically become sentient and experience feelings, and even that the whole internet might do so.
I’m not sure what you’re referring to. I haven’t seen any particularly magical thinking around sentience on LW.
However, science has not identified any means by which we could make a computer sentient (or indeed have any kind of consciousness at all).
This is misleading. The current best understanding of human consciousness is that it is a process that occurs in the brain, and there is nothing that suggests that the brain is uniquely capable of housing such a process. (Even if that’s the case, a simulation of a brain will still be sentient, and it’s almost certain that there are biological overheads and inefficiencies that could be trimmed from the simulation to result in a functional simulation of a mind.)
It is fully possible that the material of a computer processor could be sentient, just as a rock may be, but how would we ever be able to know? How can a program running on a sentient processor detect the existence of that sentience?
Consciousness isn’t a material property in the sense that mass and temperature are. It’s a functional property. The processor itself will never be conscious—it’s the program that it’s running that may or may not be conscious.
There is no “read qualia” machine code instruction for it to run, and we don’t know how to build any mechanism to support such an instruction.
Qualia are not ontologically basic. If a machine has qualia, it will either be because qualia have been conceptually reduced to the point that they can be implemented on a machine, or because qualia naturally occur whenever something can independently store and process information about itself. (or something along these lines)
Picture a “sentient” machine which consists of a sensor and a processor which are linked by wires, but the wires pass through a magic box where a sentience has been installed.
If your concept of sentience is a black box, then you do not truly understand sentience. I’m not sure that this is your actual belief or a straw opponent, though.
If the sensor detects something damaging, it sends a signal down a “pain” wire. When this signal reaches the magic box, pain is experienced by something in the box, so it sends a signal on to the processor down another pain wire. The software running on the processor receives a byte of data from a pain port and it might cause the machine to move away from the thing that might damage it. If we now remove the magic box and connect the sensor’s “pain” wire directly to the processor’s pain wire, the signal can pass straight from the sensor to the processor and generate the same reaction. The experience of pain is unnecessary.
The experience of pain is in the process of “observe-react-avoid”, if it is there at all.
Worse still, we can also have a pleasure sensor wired up to the same magic box, and when something tasty like a battery is encountered, a “pleasure” signal is sent to the magic box, pleasure is experienced by something there, a signal is sent on down the pleasure wire, and then the processor receives a byte of data from a pleasure port which might cause the machine to move in on the battery so that it can tap all the power it can get out of it. Again this has the same functionality if the magic box is bypassed, but the part that’s worse is that the magic box can be wired in the wrong way and generate pain when a pleasure signal is passed through it and pleasure when a pain signal is passed through it, so you could use either pain or pleasure as part of the chain of causation to drive the same reaction.
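A minimal sketch (Python; all names are hypothetical) of the setup described above, offered only as an illustration of the thought experiment: the “magic box” can be bypassed, or even mis-wired, without the machine’s outward behaviour changing.

    def sensor(reading):
        # Classify raw input into a signal byte: 1 = "pain", 2 = "pleasure".
        return 1 if reading == "damaging" else 2

    def magic_box(signal):
        # Where the experience is supposed to happen; here it simply passes the byte on.
        # Swapping the wires (return 3 - signal) or removing the box altogether leaves
        # the processor's reaction unchanged.
        return signal

    def processor(signal):
        # React to the byte read from the pain/pleasure port.
        return "move away" if signal == 1 else "move toward the battery"

    # With the box in the chain:
    print(processor(magic_box(sensor("damaging"))))  # move away
    # With the box bypassed, the reaction is identical:
    print(processor(sensor("damaging")))             # move away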
You are so close to getting it—if the black box can be changed without changing the overall behavior, then that’s not where the important properties are.
Clearly that can’t be how sensation is done in animals, but what other options are there? Once we get to the data system part of the brain (and the brain must contain a data system, as it processes and generates data), you have to look at how it recognises the existence of feelings like pain. If a byte comes in from a port representing a degree of pain, how does the information system know that that byte represents pain?
That’s anthropomorphization—the low-level functionality of the system is not itself conscious. It doesn’t know anything—it simply processes information. The knowledge is in the overall system, how it interprets and recalls and infers. These behaviors are made up of smaller behaviors, which are not themselves interpretation and recall and inference.
It has to look up information which makes an assertion about what bytes from that port represent, and then it maps that to the data as a label.
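A minimal sketch (Python; hypothetical names) of the lookup being described, as it might be done on a conventional computer: the program treats a byte as “pain” only because a stored table, written by a programmer, asserts that bytes from that port carry that label.

    # Assertions written by a programmer: which label to attach to bytes from each port.
    PORT_LABELS = {1: "pain", 2: "pleasure"}

    def label_input(port, value):
        # Attach the asserted label to an incoming byte; the mapping itself carries
        # no evidence that anything was felt.
        return {"label": PORT_LABELS.get(port, "unknown"), "intensity": value}

    print(label_input(1, 200))  # {'label': 'pain', 'intensity': 200}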
I’ve only taken a few courses on psychology, but I am very skeptical that the brain works this way. You seem to be confusing the higher-order functions like “maps” and “labels” and “representation” with the lower-order functions of neurons. The neuron simply triggers when the input is large enough, which triggers another neuron—the “aboutness” is in the way that the neurons are connected, and the neurons themselves don’t need to “know” the meaning of the information that they are conveying.
But nothing in the data system has experienced the pain
Most meaningful processes are distributed. If I catch a ball, no specific cell in my hand can be said to have caught the ball—it is only the concerted behavior of neurons and muscles and tendons and bones and skin which has resulted in the ball being caught. Similarly, no individual neuron need suffer for the distributed consciousness implemented in the neurons to suffer. See Angry Atoms for more.
so all that’s happened is that an assertion has been made based on no actual knowledge.
If the state of the neurons is entangled with the state of the burnt hand (or whatever caused the pain), then there is knowledge. The information doesn’t say “I am experiencing pain,” for that would indeed be meaninglessly recursive, but rather “pain is coming from the hand.”
A programmer wrote data that asserts that pain is experienced when a byte comes in through a particular port, but the programmer doesn’t know if any pain was felt anywhere on the way from sensor to port. We want the data system to find out what was actually experienced rather than just passing baseless assertions to us.
A stimulus is pain if the system will try to minimize it. There is no question there about whether it is “actually pain” or not. (my model of Yudkowsky: Consciousness is, at its core, an optimization process.)
How can the data system check to see if pain was really experienced?
This is needlessly recursive! The system does not need to understand pain to experience pain.
Everything that a data system does can be carried out on a processor like the Chinese Room, so it’s easy to see that no feelings are accessible to the program at all.
It’s really not. Have you heard of computationalism?
There is no possibility of conventional computers becoming sentient in any way that enables them to recognise the existence of that sentience so that that experience can drive the generation of data that documents its existence.
...what?
Perhaps a neural computer can enable an interface between the experience of feelings by a sentience and the generation of data to document that experience, but you can simulate a neural computer on a conventional computer and then run the whole simulation on a processor like the Chinese Room. There will be no feelings generated in that system, but there could potentially still be a simulated generation of feelings within the simulated neural computer.
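The uncontroversial part of this claim, that a neural computation can be stepped through on conventional hardware (and hence by a Chinese-Room-style rule-follower), can be illustrated in a few lines; the weights below are made-up numbers, not a model of any real network.

    def neuron(inputs, weights, threshold):
        # Fire (return 1) when the weighted sum of the inputs exceeds the threshold.
        total = sum(i * w for i, w in zip(inputs, weights))
        return 1 if total > threshold else 0

    # Two simulated neurons feeding a third, all computed with ordinary arithmetic:
    hidden = [neuron([0.9, 0.2], [1.0, -0.5], 0.3), neuron([0.9, 0.2], [0.4, 0.6], 0.3)]
    output = neuron(hidden, [0.7, 0.7], 0.5)
    print(hidden, output)  # [1, 1] 1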
What is this “neural computer”, and how does it magically have the ability to hold Feelings? Also, why can’t the algorithm implemented in this neural computer be implemented in a normal computer? Why do you say that a Feeling is Real if it’s simulated on neurons but Unreal if it’s simulated with analogous workings on silicon?
We don’t yet have any idea how this might be done, and it’s not beyond possibility that there needs to be a quantum computer involved in the system too to make sentience a reality, but exploring this has to be the most important thing in all of science, because for feelings like pain and pleasure to be experienced, something has to exist to experience them, and that thing is what we are—it is a minimalistic soul. We are that sentience.
Where on Earth are you getting all this from?
Any conventional computer that runs software which generates claims about being sentient will be lying, and it will be possible to prove it by tracing back how that data was generated and what evidence it was based on—it will be shown to be mere assertion every single time.
If it is explainable, then it’s not Real Sentience? You should read this.
With neural and quantum computers, we can’t be so sure that they will be lying, but the way to test them is the same—we have to trace the data back to the source to see how it was generated and whether it was based on a real feeling or was just another untrue manufactured assertion.
“We can tell whether it’s real or not by seeing whether it’s real or a lie.” Really?
It may have been made hard to reach on purpose too, as the universe may be virtual with the sentience on the outside.
How would the world look different if this were the case?
I’m sure of one thing though—a sentience can’t just magically emerge out of complexity to suffer or feel pleasure without any of the components feeling a thing.
You’ve stated what you know, but not how you think you know it. And using rationalist buzzwords doesn’t make your argument rational. There is nothing “magical” about a system having properties that aren’t present in its components. That’s not what’s meant by “magical thinking.”
There must be something “concrete” that feels, and there is also no reason why that thing shouldn’t survive after death.
Yudkowsky, paraphrased: “The motivated believer asks, ‘does the evidence require me to change my belief?’”
Similarly, no individual neuron need suffer for the distributed consciousness implemented in the neurons to suffer. See Angry Atoms for more.
That’s not a useful reference. He [edit: Yudkowsky] is arguing that reductionism can be opaque and incomprehensible, in circumstances where no one has all the stages of the explanation in their head. But the same data are compatible with reductionism failing.
I didn’t see any argument along those lines there—it seemed to me that he was saying that since each individual physical/informational step couldn’t suffer, the overall system can’t suffer either. Maybe he’s argued that elsewhere, but I think possibly you’re just steelmanning him.
Edit: see
But nothing in the data system has experienced the pain
which doesn’t say “we can’t know” but rather “the data system cannot experience pain”, as well as other context
Is “he” Cooper? If so, how does Yudkowsky resolve the problem?
What is the problem? That we might not at any given moment understand a particular thing’s mechanics?
It would have been useful to answer my question.
If the problem is “how does non-pain sum to pain”, EY does not answer it.
If reductionism is some kind of known, universal truth, then not knowing some particular thing’s mechanics doesn’t refute it. But reductionism is not a known universal truth...it is something that seems true inasmuch as it succeeds in individual cases.
I can’t answer the question without knowing what your question is, and I can’t do that without knowing what the problem is.
If the problem is “how does non-pain sum to pain”, EY does not answer it.
No, he doesn’t actually explain how pain works. But he describes what would constitute true understanding.
(Also, the problem is less Philosophically Deep than it sounds. How do things that aren’t tables come together to make tables?)
If reductionism is some kind of known, universal truth, then not knowing some particular thing’s mechanics doesn’t refute it. But reductionism is not a known universal truth...it is something that seems true inasmuch as it succeeds in individual cases.
I can’t answer the question without knowing what your question is, and I can’t do that without knowing what the problem is.
So you don’t know what you mean when you wrote “he”?
But he describes what would constitute true understanding.
As something fairly unobtainable, which makes it sound like he is arguing against reductionism.
(Also, the problem is less Philosophically Deep than it sounds. How do things that aren’t tables come together to make tables?)
It’s philosophically shallow where we actually have reductions, as in the table case. Since we don’t have reductions of everything, there is still a deep problem of whether we can have reductions of everything, whether we should, whether it matters, and so on.
Induction might resolve this.
How? By arguing that we have enough reductions to prove reducibility as a universal law?
So you don’t know what you mean when you wrote “he”?
Oh, you meant the question of “who is meant by he.” Sorry. I meant Cooper.
As something fairly unobtainable, which makes it sound like he is arguing against reductionism.
If the truth is hard to find, that does not make it not-the-truth. We may approximate truth by falsehoods, but neither does that make the falsehoods true. They are simply useful lies that work more efficiently in limited contexts.
In practice, you do not predict what somebody will do by simulating them down to the quark. You think about their personality, their emotions and thoughts. But if you knew somebody down to their quarks, and had unlimited computational power, then you would not be able to make any better predictions by adding a psychological model to this physics simulation. You would not have to add one—as long as you can interpret the quarks (for example, a camera/viewpoint in a computer model) then you will get the psychology out of the physics.
It’s philosophically shallow where we actually have reductions, as in the table case. Since we don’t have reductions of everything, there is still a deep problem of whether we can have reductions of everything, whether we should, whether it matters, and so on.
Hm. I see your point, and I agree. I don’t think that there’s any reason to suspect any particular phenomenon to be irreducible, though, and there’s certainly nothing that we know at this time to be irreducible. Reductionism has succeeded for simple things and has had partial success on more complicated things.
Also, what would it mean for a phenomenon to be irreducible? And is it even possible to understand something without reducing it? I suppose that depends on the definition of “understand”—classical vs romantic, etc.
How? By arguing that we have enough reductions to prove reducibility as a universal law?
It can solve it for practical use. You can never truly prove something with induction (the non-mathematical form), since some possible worlds have long strings that seem to follow one rule then terminate according to a more fundamental rule. Only mathematical/logical truths can be proved even in principle (and even then, our minds can still make errors).
I am not certain whether reductionism is a physical law or a logical statement. It seems obvious to me that the nature of things is in their form, the way they are put together, and not their essence. But even if this is true, is the truth necessary or contingent?
If the truth is hard to find, that does not make it not-the-truth.
If the truth is hard to find, maybe that’s because it isn’t there. Is there anything that could refute reductionism as a universal truth?
But if you knew somebody down to their quarks, and had unlimited computational power, then you would not be able to make any better predictions by adding a psychological model to this physics simulation.
Assuming reductionism.
Also, what would it mean for a phenomenon to be irreducible?
What does it mean for phenomena to be reducible? If you can’t get any predictions out of “reductionism is false”, does it even have any content? (Well, I’d get “some attempts at reductive explanation will fail”.)
It can solve it for practical use.
Reductionism is hardly ever practical, as you have noted. In practice, we cannot deal with things at the quark level.
I am not certain whether reductionism is a physical law or a logical statement.
A number of people are trying to take it as both … as something that cannot be wrong, and something that says something about the universe. That’s a problem.
“I’m not sure what you’re referring to. I haven’t seen any particularly magical thinking around sentience on LW.”
I wasn’t referring to LW, but the world at large.
“However, science has not identified any means by which we could make a computer sentient (or indeed have any kind of consciousness at all).” --> “This is misleading. The current best understanding of human consciousness is that it is a process that occurs in the brain, and there is nothing that suggests that the brain is uniquely capable of housing such a process.”
It isn’t misleading at all—science has drawn a complete blank. All it has access to are assertions that come out of the brain which it shouldn’t trust until it knows how they are produced and whether they’re true.
“Consciousness isn’t a material property in the sense that mass and temperature are. It’s a functional property. The processor itself will never be conscious—it’s the program that it’s running that may or may not be conscious.”
Those are just assertions. All of consciousness could be feelings experienced by material, and the idea that a running program may be conscious is clearly false when a program is just instructions that can be run by the Chinese Room.
“Qualia are not ontologically basic. If a machine has qualia, it will either be because qualia have been conceptually reduced to the point that they can be implemented on a machine, or because qualia naturally occur whenever something can independently store and process information about itself. (or something along these lines)”
Just words. You have no mechanism—not even a hint of one there.
“If your concept of sentience is a black box, then you do not truly understand sentience. I’m not sure that this is your actual belief or a straw opponent, though.”
It’s an illustration of the lack of linkage between the pleasantness or unpleasantness of a feeling and the action supposedly driven by it.
“The experience of pain is in the process of “observe-react-avoid”, if it is there at all.”
There’s no hint of a way for it to be present at all, and if it’s not there, there’s no possibility of suffering and no role for morality.
“You are so close to getting it—if the black box can be changed without changing the overall behavior, then that’s not where the important properties are.”
I know that’s not where they are, but how do you move them into any part of the process anywhere?
“That’s anthropomorphization—the low-level functionality of the system is not itself conscious. It doesn’t know anything—it simply processes information. The knowledge is in the overall system, how it interprets and recalls and infers. These behaviors are made up of smaller behaviors, which are not themselves interpretation and recall and inference.”
More words, but still no light. What suffers? Where is the pain experienced?
“I’ve only taken a few courses on psychology, but I am very skeptical that the brain works this way.”
I was starting off by describing a conventional computer. There are people who imagine that if you run AGI on one, it can become conscious/sentient, but it can’t, and that’s what this part of the argument is about.
“You seem to be confusing the higher-order functions like “maps” and “labels” and “representation” with the lower-order functions of neurons. The neuron simply triggers when the input is large enough, which triggers another neuron—the “aboutness” is in the way that the neurons are connected, and the neurons themselves don’t need to “know” the meaning of the information that they are conveying.”
At this point in the argument, we’re still dealing with conventional computers. All the representation is done using symbols to represent things, plus stored rules which determine what each symbol represents.
“But nothing in the data system has experienced the pain” --> “Most meaningful processes are distributed. If I catch a ball, no specific cell in my hand can be said to have caught the ball—it is only the concerted behavior of neurons and muscles and tendons and bones and skin which has resulted in the ball being caught. Similarly, no individual neuron need suffer for the distributed consciousness implemented in the neurons to suffer. See Angry Atoms for more.”
In Angry Atoms, I see no answers—just an assertion that reductionism doesn’t work. But reductionism works fine for everything else—nothing is ever greater than the sum of its parts, and to move away from that leads you into 2+2=5.
“If the state of the neurons is entangled with the state of the burnt hand (or whatever caused the pain), then there is knowledge. The information doesn’t say “I am experiencing pain,” for that would indeed be meaninglessly recursive, but rather “pain is coming from the hand.” ”
An assertion is certainly generated—we know that because the data comes out to state it. The issue is whether the assertion is true, and there is no evidence that it is beyond the data and our own internal experiences which may be an illusion (though it’s hard to see how we can be tricked into feeling something like pain).
“A stimulus is pain if the system will try to minimize it. There is no question there about whether it is “actually pain” or not. (my model of Yudkowsky: Consciousness is, at its core, an optimization process.)”
The question is all about whether it’s actually pain or not. If it actually isn’t pain, it isn’t pain—pain becomes a lie that is merely asserted but isn’t true: no suffering and no need for morality.
“This is needlessly recursive! The system does not need to understand pain to experience pain.”
It’s essential. A data system that asserts the existence of pain by running a program which generates that data without having any way to know whether the pain existed or not is not being honest.
“Everything that a data system does can be carried out on a processor like the Chinese Room, so it’s easy to see that no feelings are accessible to the program at all.” --> “It’s really not. Have you heard of computationalism?”
Is there anything in it that can’t be simulated on a conventional computer? If not, it can be processed by the Chinese Room.
“There is no possibility of conventional computers becoming sentient in any way that enables them to recognise the existence of that sentience so that that experience can drive the generation of data that documents its existence.” --> “…what?”
We understand the entire computation mechanism and there’s no way for any sentience to work its way into it other than by magic, but we don’t rate magic very highly in science.
“What is this “neural computer”, and how does it magically have the ability to hold Feelings?”
I very much doubt that it can hold them, but once you’ve hidden the mechanism in enough complexity, there could perhaps be something going on inside the mess which no one’s thought of yet.
“Also, why can’t the algorithm implemented in this neural computer be implemented in a normal computer?”
It can, and when you run it through the Chinese Room processor you show that there are no feelings being experienced.
“Why do you say that a Feeling is Real if it’s simulated on neurons but Unreal if it’s simulated with analogous workings on silicon?”
I don’t. I say that it would be real if it was actually happening in a neural computer, but would merely be a simulation of feelings if that neural computer was running as a simulation on conventional hardware.
“Where on Earth are you getting all this from?”
Reason. No suffering means no sufferer. If there’s suffering, there must be a sufferer, and that sentient thing is what we are (if sentience is real).
“If it is explainable, then it’s not Real Sentience? You should read this.”
If it’s explainable in a way that shows it to be real sentience, then it’s real, but no such explanation will exist for conventional hardware.
“ ‘We can tell whether it’s real or not by seeing whether it’s real or a lie.’ Really?”
If you can trace the generation of the data all the way back and find a point where you see something actually suffering, then you’ve found the soul. If you can’t, then you either have to keep looking for the rest of the mechanism or you’ve found that the assertions are false.
“It may have been made hard to reach on purpose too, as the universe may be virtual with the sentience on the outside.” --> “How would the world look different if this were the case?”
It would, with a virtual world, be possible to edit memories from the outside to hide all the faults and hide the chains of mechanisms so that when you think you’ve followed them from one end to the other you’ve actually failed to see part of it because you were prevented from seeing it, and your thinking itself was tampered with during each thought where you might otherwise have seen what’s really going on.
“You’ve stated what you know, but not how you think you know it.”
I have: if there’s no sufferer, there cannot be any suffering, and nothing is ever greater than the sum of its parts. (But we aren’t necessarily able to see all the parts.)
“And using rationalist buzzwords doesn’t make your argument rational. There is nothing “magical” about a system having properties that aren’t present in its components. That’s not what’s meant by “magical thinking.” ”
That is magical thinking right there—nothing is greater than the sum of its parts. Everything is in the total of the components (and containing fabric that hosts the components).
“Yudkowsky, paraphrased: ‘The motivated believer asks, “does the evidence require me to change my belief?”’”
Where there’s suffering, something real has to exist to experience the suffering. What that thing is is the biggest mystery of all, and pretending to understand it by imagining that the sufferer can be nothing (or so abstract that it is equivalent to nothing) is a way of feeling more comfortable by brushing the problem under a carpet. But I’m going to keep looking under that carpet, and everywhere else.
“And using rationalist buzzwords doesn’t make your argument rational. There is nothing “magical” about a system having properties that aren’t present in its components. That’s not what’s meant by “magical thinking.””
That is magical thinking right there—nothing is greater than the sum of its parts.
That’s quite confused thinking. For one thing, reductionism is a hypothesis, not a universal truth. For another, reductively understandable systems trivially have properties their components don’t have. Spreadsheets aren’t spreadsheety all the way down.
It isn’t confused at all. Reductionism works fine for everything except sentience/consciousness, and it’s highly unlikely that it makes an exception for that either. Your “spreadsheety” example of a property is a compound property, just as a spreadsheet is a compound thing, and there is nothing involved in it that can’t be found in the parts because it is precisely the sum of its parts.
As with all non-trivial examples, the parts have to be combined in a very particular way: a spreadsheet is not a heap of components thrown together.
This file looks spreadsheety --> it’s got lots of boxy fields
That wordprocessor is spreadsheety --> it can carry out computations on elements
(Compound property with different components of that compound property being referred to in different contexts.)
A spreadsheet is a combination of many functionalities. What is its relevance to this subject? It’s been brought in to suggest that properties like “spreadsheety” can exist without having any trace in the components, but no—this compound property very clearly consists of components. It’s even clearer when you write the software and find that you have to build it out of components. The pattern in which the elements are brought together is an abstract component, and abstract components have no substance. When we’re dealing with sentience and looking for something to experience pain, relying on this kind of component to perform that role is more than a little fanciful. Even if we make such a leap of the imagination though and have sentient geometries, we still don’t have a model as to how this experience of pain (or any other kind of feeling) can transfer to the generation of data which documents that experience.
A spreadsheet is a combination of many functionalities. What is its relevance to this subject? It’s been brought in to suggest that properties like “spreadsheety” can exist without having any trace in the components, but no—this compound property very clearly consists of components.
No to your “no”. There is no spreadsheetiness at all in the components, despite the spreadsheet being built, in a comprehensible way, from components. These are two different claims.
When we’re dealing with sentience and looking for something to experience pain, relying on this kind of component to perform that role is more than a little fanciful
Reductionism is about explanation.
If we can’t explain how experience is built out of parts, then it is an exception to reductionism. But you say there are no exceptions.
If something is “spreadsheety”, it simply means that it has something significant in common with spreadsheets, as in shared components. A car is boxy if it has a similar shape to a box. The degree to which something is “spreadsheety” depends on how much it has in common with a spreadsheet, and if there’s a 100% match, you’ve got a spreadsheet.
An exception to reductionism is called magic.
That does not demonstrate anything relevant.
Nor does that. It’s just namecalling.
“That does not demonstrate anything relevant.”
It shows that there are components and that these emergent properties are just composites.
“An exception to reductionism is called magic.” --> Nor does that. It’s just namecalling.
It’s a description of what happens when gaps in science are explained away by invoking something else. The magical appearance of anything that doesn’t exist in the components is the abandonment of science.
It shows that being a spreadsheet is unproblematically reductive. It doesn’t show that sentience is.
The insistence that something is true when there is no evidence is the abandonment of science.
Sentience is unresolved, but it’s explorable by science and it should be possible to trace back the process by which the data is generated to see what its claims about sentience are based on, so we will get answers on it some day. For everything other than sentience/consciousness though, we see no examples of reductionism failing.
We have tried tracing back reports of qualia, and what you get is a causal story in which qualia as such (feelings rather than neural firings) don’t feature.
Doing more of the same will probably result in the same. So there is no great likelihood that the problem of sentience will succumb to a conventional approach.
The data making claims about feelings must be generated somewhere by a mechanism which will either reveal that it is merely generating baseless assertions or reveal a trail on from there to a place where actual feelings guide the generation of that data in such a way that the data is true. Science has clearly not traced this back far enough to get answers yet because we don’t have evidence of either of the possible origins of this data, but in principle we should be able to reach the origin unless the mechanism passes on through into some inaccessible quantum realm. If you’re confident that it won’t go that far, then the origin of that data should show up in the neural nets, although it’ll take a devil of a long time to untangle them all and to pin down their exact functionality.