“You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.”
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral. (We all have such a database in our heads, but each contains different data and can apply different weightings to the same things, leading to disagreements between us about what’s moral, but AGI will over time generate its own database which will end up being much more accurate than any of ours.)
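To make that concrete, here is a minimal sketch of the weighing-up step, assuming a purely hypothetical harm-ratings table (the names, numbers, and functions are illustrative placeholders, not a real database):

```python
# Minimal sketch of "weigh up the harm, pick the choice that causes less of it".
# The harm ratings below are hypothetical placeholders, not a real database.

HARM_RATINGS = {        # rated harm per affected sentience, arbitrary units
    "broken_leg": 60,
    "common_cold": 5,
}

def total_harm(choice):
    """Sum the rated harm over everyone the choice would affect."""
    return sum(HARM_RATINGS[outcome] for outcome in choice["outcomes"])

def more_moral(choice_a, choice_b):
    """Return whichever choice carries the lower total weighted harm."""
    return choice_a if total_harm(choice_a) <= total_harm(choice_b) else choice_b

swerve = {"name": "swerve", "outcomes": ["broken_leg"]}
carry_on = {"name": "carry on", "outcomes": ["broken_leg", "common_cold"]}
print(more_moral(swerve, carry_on)["name"])  # -> swerve
```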
“And have you managed to convince anyone that your ideas are correct?”
I’ve found a mixture: some people think it’s right, while others say it’s wrong and point me towards alternatives which are demonstrably faulty.
“They are inferior because they get the wrong answers.”
Well, that’s what we need to explore, and we need to take it to a point where it isn’t just a battle of assertions and counter-assertions.
“I can easily show that your approach generates wrong answers. Observe: You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.”
This may need a new blog post to explore it fully, but I’ll try to provide a short version here. If a favourite relative of yours were to die and be reincarnated as a rat, you would, if you’re rational, want to treat that rat well if you knew who it used to be. You wouldn’t regard that rat as an inferior kind of thing that doesn’t deserve protection from people who might seek to make it suffer. It wouldn’t matter that your reincarnated relative has no recollection of their previous life—they would matter to you as much in that form as they would if they had a stroke and were reduced to a similar capability to a rat and had lost all memory of who they were. The two things are equivalent, and it’s irrational to consider one of them as being in less need of protection from torture than the other.
Reincarnation! Really! You need to resort to bringing that crazy idea into this? (Not your reply, but it’s the kind of reaction that such an idea is likely to generate). But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player. But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs. In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects. If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too. If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic. If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking. The sentience in that rat could quite reasonably be someone you love, or someone you loved in a past life long ago. It would be a serious error not to regard all sentiences as having equal value unless you have proof that some of them are lesser things, but you don’t have that.
You’ve also opened the door to “superior” aliens deciding that the sentience in you isn’t equivalent to the sentiences in them, which allows them to treat you in less moral ways by applying your own standards.
“As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).”
And yet one of the answers is actually right, while the other isn’t. Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
“‘and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.’ --> Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!”
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience).
I disagree. I reject your standard of correctness. (As do many other people.)
The question of whether there is an objective standard of correctness for moral judgments is the domain of metaethics. If you have not encountered this field before now, I strongly suggest that you investigate it in detail; there is a great deal of material there, which is relevant to this discussion.
(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense, but there’s no need whatever to rehash, in this thread, the arguments for why it is nonsense. I will only suggest that you read the sequences; much of the material therein is targeted at precisely this sort of topic, and this sort of viewpoint.)
“I disagree. I reject your standard of correctness. (As do many other people.)”
Shingles is worse than a cold. I haven’t had it, but those who have will tell you how bad the pain is. We can collect data on suffering by asking people how bad things feel in comparison to other things, and this is precisely what AGI will set about doing in order to build its database and make its judgements more and more accurate. If you have the money to alleviate the suffering of one person out of a group suffering from a variety of painful conditions and all you know about them is which condition they have just acquired, you can use the data in that database to work out which one you should help. That is morality being applied, and it’s the best way of doing it—any other answer is immoral. Of course, if we know more about these people, such as how good or bad they are, that might change the result, but again there would be data that can be crunched to work out how much suffering their past actions caused to undeserving others. There is a clear mechanism for doing this, and not doing it that way using the available information is immoral.
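As a minimal sketch of that decision rule, assuming an entirely hypothetical set of survey-derived severity ratings (the names and numbers are invented for illustration, not real data):

```python
# Hypothetical severity ratings aggregated from "how bad does this feel
# compared to that?" reports; the names and numbers are invented.
severity = {
    "common_cold": 5,
    "migraine": 40,
    "shingles": 80,
}

def whom_to_help(patients):
    """Given {person: condition}, return the person whose condition is rated worst."""
    return max(patients, key=lambda person: severity[patients[person]])

patients = {"Alice": "common_cold", "Bob": "shingles", "Carol": "migraine"}
print(whom_to_help(patients))  # -> Bob
```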
“The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics.”
We already have what we need—a pragmatic system for getting as close to the ideal morality as possible based on collecting the data as to how harmful different experiences are. The data will never be complete, they will never be fully accurate, but they are the best that can be done and we have a moral duty to compile and use them.
“(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense...”
If you reject that, you are doing so in favour of magical thinking, and AGI won’t be impressed with that. The idea that the sentience in you can’t go on to become a sentience in a maggot is based on the idea that after death that sentience magically becomes nothing. I am fully aware that most people are magical thinkers, so you will always feel that you are right on the basis that hordes of fellow magical thinkers back up your magical beliefs, but you are being irrational. AGI is not going to be programmed to be irrational in the same way most humans are. The job of AGI is to model reality in the least magical way it can, and having things pop into existence out of nothing and then return to being nothing is more magical than having things continue to exist in the normal way that things in physics behave. (All those virtual particles that pop in and out of existence in the vacuum emerge from a “nothing” that isn’t nothing—it has properties such as a rule that whatever’s taken from it must have the same amount handed back.) Religious people have magical beliefs too, and they too make the mistake of thinking that numbers of supporters are evidence that their beliefs are right, but being right is not democratic. Being right depends squarely on being right. Again here, we don’t have absolute right answers in one sense, but we do in terms of what is probably right, and an idea that depends on less magic (and more rational mechanism) is more likely to be right. You have made a fundamental mistake here by rejecting a sound idea on the basis of a bias in your model of reality that has led to you miscategorising it as nonsense, while your evidence for it being nonsense is the support of a crowd of people who haven’t bothered to think it through.
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral.
How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.
And yet one of the answers is actually right, while the other isn’t.
Is 2+2 equal to 5 or to fish?
Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
You should really read up on the Orthogonality Thesis and related concepts. Also, how do you plan on distinguishing between right and wrong moralities?
“How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.”
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
“Is 2+2 equal to 5 or to fish?”
Neither of those results works, but neither of them is my answer.
“What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.”
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“You should really read up on the Orthogonality Thesis and related concepts.”
Thanks - all such pointers are welcome.
“Also, how do you plan on distinguishing between right and wrong moralities?”
First by recognising what morality is for. If there were no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and got the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied.

The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them. The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand.

To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available. With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
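As a minimal sketch of the “imagine you are all of the individuals” calculation, under invented numbers: a candidate distribution of suffering and pleasure is scored by how the worst-off individual fares, which is the verdict you would reach if you had to live every one of the lives in turn (the scenario and figures are illustrative only, not a real system):

```python
# Score a distribution by imagining having to live every life in it:
# the rule favoured here cares about how the worst-off individual fares.
# The scenario and numbers are invented for illustration.

def worst_off(distribution):
    """Net well-being (pleasure minus suffering) of the worst-off individual."""
    return min(pleasure - suffering for pleasure, suffering in distribution.values())

# Two ways of sharing the same work and the same rewards among three people:
# (pleasure gained, suffering borne) per person.
same_victim_every_time = {"A": (10, 9), "B": (10, 0), "C": (10, 0)}
take_turns             = {"A": (10, 3), "B": (10, 3), "C": (10, 3)}

fairer = max([same_victim_every_time, take_turns], key=worst_off)
print(fairer is take_turns)  # -> True: sharing the burden out is judged fairer
```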
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.
Neither of those results works, but neither of them is my answer.
I’ll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong.
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“All sentiences are equally important” is definitely a moral statement.
First by recognising what morality is for. If there were no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and got the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied. The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them. The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available.
I think that this is a fine (read: “quite good”; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.
With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
I agree that it is mathematics, but where is this “proper” coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that “what maximizes X?” is objective, though.
“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of best proposed solutions, and there are no league tables for each entry listing the worst problems with them. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad—if they weren’t, they wouldn’t have equal values.
“‘You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.’ --> ‘All sentiences are equally important’ is definitely a moral statement.”
Again you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.”
It will be possible with AGI to have it run multiple models of morality and to show up the differences between them and to prove that it is doing the logic correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
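As a toy illustration of what “run multiple models of morality and show up the differences between them” might look like, here is a hedged sketch: two invented moral models are applied to the same scenario data and their disagreements are listed (the models, scenario, and numbers are all hypothetical):

```python
# Toy illustration: run two candidate moral models over the same scenario data
# and list where their verdicts diverge. Models, scenario and numbers are invented.

def harm_minimiser(scenario):
    """Pick the option with the least total harm."""
    return min(scenario["options"], key=lambda option: sum(option["harms"]))

def worst_case_minimiser(scenario):
    """Pick the option whose single worst harm is smallest."""
    return min(scenario["options"], key=lambda option: max(option["harms"]))

scenarios = [
    {"name": "toy rescue case",
     "options": [{"name": "divert", "harms": [5, 5, 5]},
                 {"name": "do nothing", "harms": [12]}]},
]

for scenario in scenarios:
    a = harm_minimiser(scenario)
    b = worst_case_minimiser(scenario)
    if a["name"] != b["name"]:
        print(f'{scenario["name"]}: the models disagree ({a["name"]} vs {b["name"]})')
```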
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct—fair share where everyone gets the same amount of reward for the same amount of suffering.
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will—bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm.
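A minimal sketch of the kind of calculation meant here, with invented figures: punishment is treated as justified only when the harm it inflicts is outweighed by the harm it is expected to prevent through deterrence (the function and numbers are illustrative, not a proposal about real sentencing):

```python
# Punishment framed as harm management rather than retribution (illustrative only).
def punishment_justified(harm_inflicted, harm_prevented_by_deterrence):
    """Punish only if doing so is expected to reduce total harm overall."""
    return harm_prevented_by_deterrence > harm_inflicted

print(punishment_justified(harm_inflicted=30, harm_prevented_by_deterrence=100))  # True
print(punishment_justified(harm_inflicted=30, harm_prevented_by_deterrence=10))   # False
```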
But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player.
What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body, but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes. If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind. A theory of identity that better meshes with reality is that of functionalism, which in T-shirt slogan form is “the mind is a physical process, and the particulars of that process determine identity.”
For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?
But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs.
This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.
In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects.
Almost everything we care about is composite, so this is an odd way of putting it, but yes.
If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.
One need not carry out nuclear fission to deconstruct a chair.
If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic.
The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.
If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking.
Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.
“What does it mean to be somebody else? It seems like you have the intuition of an non-physical Identity Ball which can be moved from body to body,”
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore, but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
“but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes.”
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
“If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind.”
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
“For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?,”
Thanks—I’ll take a look at that too.
“This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.”
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
“‘If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.’ --> One need not carry out nuclear fission to deconstruct a chair.”
Relevance?
“The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.”
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there. In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
“Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.”
What requires magic is for the sentient thing in us to stop existing when a person dies. What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity? How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern. My computer can run software that does intelligent things, but I can’t torture that software or the running of that software. Without a physical sufferer, there can be no suffering.
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore
How do you know it exists, if science knows nothing about it?
but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
Is your sentience in any way connected to what you say? Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
They are both categories of things. The category that you happen to place yourself in is not inherently, a priori, a Fundamentally Real Category. And even if it were a Fundamentally Real Category, that does not mean that the quantity of members of that Category is necessarily conserved over time, that members cannot join and leave as time goes on.
Relevance?
It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.
How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in its atoms?
In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.
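For concreteness, the standard electrostatics being appealed to here (ordinary textbook physics, not a claim original to either side): a lone electron already sources a field, and the repulsion only shows up as the force that field exerts on a second charge placed in it:

$$\mathbf{E}_1(\mathbf{r}) = \frac{1}{4\pi\varepsilon_0}\,\frac{-e}{r^2}\,\hat{\mathbf{r}}, \qquad \mathbf{F}_{\text{on 2}} = (-e)\,\mathbf{E}_1 = \frac{1}{4\pi\varepsilon_0}\,\frac{e^2}{r^2}\,\hat{\mathbf{r}},$$

where $\hat{\mathbf{r}}$ points from the first electron towards the second, so the force pushes them apart.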
What requires magic is for the sentient thing in us to stop existing when a person dies.
But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.
What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity?
I’ll go with the last one.
How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern.
Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.
My computer can run software that does intelligent things, but I can’t torture that software or the running of that software.
You could torture the software, if it were self-aware and had a utility function.
Without a physical sufferer, there can be no suffering.
But—where is the physical sufferer inside you?
You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!) And to go about this rationally, we would look at the differences between a brain and wallpaper—and since we haven’t observed any Consciousness Ball inside a brain, there’d be no reason to suppose that the difference is this unobservable Consciousness Ball which must be in the brain but not the wallpaper, explaining their difference. There is already an explanation. There is no need to invoke the unobservable.
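The Bayesian point gestured at here can be written out explicitly (this is just the generic form of Bayes’ rule, with no actual probability values being claimed):

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)},$$

where $H$ is “sentience is an uncommon physical process” and $E$ is “this particular pattern (wallpaper, ordinary software) shows no sign of suffering”. If $P(E \mid H) \approx P(E \mid \neg H)$, then observing lots of non-suffering patterns barely shifts the posterior either way.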
“How do you know it exists, if science knows nothing about it?”
All science has to go on is the data that people produce which makes claims about sentience, but that data can’t necessarily be trusted. Beyond that, all we have is internal belief that the feelings we imagine we experience are real because they feel real, and it’s hard to see how we could be fooled if we don’t exist to be fooled. But an AGI scientist won’t be satisfied by our claims—it could write off the whole idea as the ramblings of natural general stupidity systems.
“This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.”
That isn’t good enough. If pain is experienced by something, that something cannot be in a compound of any kind with none of the components feeling any of it. A distribution cannot suffer.
“Is your sentience in any way connected to what you say?”
It’s completely tied to what I say. The main problem is that other people tend to misinterpret what they read by mixing other ideas into it as a short cut to understanding.
“Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?”
Focus on the data generation. It takes physical processes to drive that generation, and rules are being applied in the data system to do this with each part of that process being governed by physical processes. For data to be produced that makes claims about experiences of pain, a rational process with causes and effects at every step has to run through. If the “pain” is nothing more than assertions that the data system is programmed to churn out without looking for proof of the existence of pain, there is no reason to take those assertions at face value, but if they are true, they have to fit into the cause-and-effect chain of mechanism somewhere—they have to be involved in a physical interaction, because without it, they cannot have a role in generating the data that supposedly tells us about them.
“So the Sentiences are truly epiphenomenonological, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?”
Repeatedly switching the sentient thing wouldn’t remove its causal role, and nor would having more than one sentience all acting at once—they could collectively have an input even if they aren’t all “voting the same way”, and they aren’t going to find out if they got their wish or not because they’ll be loaded with a feeling of satisfaction that they “won the vote” even if they didn’t, and they won’t remember which way they “voted” or what they were even “voting” on.
“They are both categories of things.”
“Chairness” is quite unlike sentience. “Chairness” is an imagined property, whereas sentience is an experience of a feeling.
“It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.”
You can damage a chair with an axe without breaking every bond, but some bonds will be broken. You can’t split it without breaking any bonds. Most of the chair is not broken (unless you’ve broken most of the bonds). For suffering in a brain, it isn’t necessarily atoms that suffer, but if the suffering is real, something must suffer, and if it isn’t the atoms, it must be something else. It isn’t good enough to say that it’s a plurality of atoms or an arrangement of atoms that suffers without any of the atoms feeling anything, because you’ve failed to identify the sufferer. No arrangement of non-suffering components can provide everything that’s required to support suffering.
“‘Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.’ --> How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in its atoms?”
“Chair” is a label representing a compound object. Calling it a chair doesn’t magically make it more than the sum of its parts. Chairs provide two services—one is that they support a person sitting on them, and the other is that they support someone’s back leaning against them. That is what a chair is. You can make a chair in many ways, such as by cutting out a cuboid of rock from a cliff face. You could potentially make a chair using force fields. “Chairness” is a compound property which refers to the functionalities of a chair. (Some kinds of “chairness” could also refer to other aspects of some chairs, such as their common shapes, but they are not universal.) The fundamental functionalities of chairs are found in the forces between the component atoms. The forces are present in a single atom even when it has no other atom to interact with. There is never a case where anything is more than the sum of its parts—any proposed example of such a thing is wrong.
Is there an example of something being more than the sum of its parts there? If so, why don’t we go directly to that. Give me your best example of this magical phenomenon.
“But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.”
In both cases, you’re using compound properties where they are built up of component properties, and then you’re wrongly considering your compound properties to be fundamental ones.
“But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.”
You can’t make a process suffer.
“Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.”
Claiming that a pattern can suffer is a way-out claim. Maybe the universe really is that weird, but it’s worth spelling out clearly what it is you’re attributing sentience to. If you’re happy with the idea of a pattern experiencing pain, then patterns become remarkable things. (I’d rather look for something of more substance than a mere arrangement, but it leaves us both with the bigger problem of how that sentience can make its existence known to a data system.)
“You could torture the software, if it were self-aware and had a utility function.”
Torturing software is like trying to torture the text in an ebook.
“But—where is the physical sufferer inside you?”
That’s what I want to know.
“You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!)”
Do you seriously imagine that there’s any magic pattern that can feel pain, such as a pattern of activity where none of the component actions feel anything?
“There is already an explanation. There is no need to invoke the unobservable.”
If you can’t identify anything that’s suffering, you don’t have an explanation, and if you can’t identify how your imagined-to-be-suffering process or pattern is transmitting knowledge of that suffering to the processes that build the data that documents the experience of suffering, again you don’t have an explanation.
″ You seem to want to sidestep the question of “just what are the right answers to questions of morality and metaethics?”. I submit to you that this is, in fact, the critical question.”
I have never sidestepped anything. The right answers are the ones dictated by the weighing up of harm based on the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice has a higher weight than another choice, that other choice is more moral. (We all have such a database in our heads, but each contains different data and can apply different weightings to the same things, leading to disagreements between us about what’s moral, but AGI will over time generate its own database which will end up being much more accurate than any of ours.)
“And have you managed to convince anyone that your ideas are correct?”
I’ve found a mixture of people who think it’s right and others who say it’s wrong and who point me towards alternatives which are demonstrably faulty.
“They are inferior because they get the wrong answers.”
Well, that’s what we need to explore, and we need to take it to a point where it isn’t just a battle of assertions and counter-assertions.
“I can easily show that your approach generates wrong answers. Observe: You say that “we have to stand by the principle that all sentiences are equally important”. But I don’t agree; I don’t stand by that principle, nor is there any reason for me to do so, as it is counter to my values.”
This may needs a new blog post to explore it fully, but I’ll try to provide a short version here. If a favourite relative of yours was to die and be reincarnated as a rat, you would, if you’re rational, want to treat that rat well if you knew who it used to be. You wouldn’t regard that rat as an inferior kind of thing that doesn’t deserve protection from people who might seek to make it suffer. It wouldn’t matter that your reincarnated relative has no recollection of their previous life—they would matter to you as much in that form as they would if they had a stroke and were reduced to similar capability to a rat and had lot all memory of who they were. The two things are equivalent and it’s irrational to consider one of them as being in less need of protection from torture than the other.
Reincarnation! Really! You need to resort to bringing that crazy idea into this? (Not your reply, but it’s the kind of reaction that such an idea is likely to generate). But this is an important point—the idea that reincarnation can occur is more rational than the alternatives. If the universe is virtual, reincarnation is easy and you can be made to live as any sentient player. But if it isn’t, and if there’s no God waiting to scoop you up into his lair, what happens to the thing (or things) inside you that is sentient? Does it magically disappear and turn into nothing? Did it magically pop into existence out of nothing in the first place? Those are mainstream atheist religious beliefs. In nature, there isn’t anything that can be created or destroyed other than building and breaking up composite objects. If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too. If the thing that suffers is something that emerges out of complexity without any of the components suffering, again that’s magic. If there is sentience (feelings), there is a sentience to experience those feelings, and it isn’t easy to destroy it—that takes magic, and we shouldn’t be using magic as mechanisms in our thinking. The sentience in that rat could quite reasonably be someone you love, or someone you loved in a past life long ago. It would be a serious error not to regard all sentiences as having equal value unless you have proof that some of them are lesser things, but you don’t have that.
You’ve also opened the door to “superior” aliens deciding that the sentience in you isn’t equivalent to the sentiences in them, which allows them to tread you in less moral ways by applying your own standards.
“As you see, your answer differs from mine. That makes it wrong (by my standards—which are the ones that matter to me, of course).”
And yet one of the answers is actually right, while the other isn’t. Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
″ “and show me a proposed system of morality that makes different judgements from mine which I can’t show to be defective.” --> Why? For you to be demonstrably wrong, it is not required that anyone or anything else be demonstrably right. If you say that 2 and 2 make 5, you are wrong even if no one present can come up with the right answer about what 2 and 2 actually make—whatever it is, it sure ain’t 5!”
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, one of them will be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
I disagree. I reject your standard of correctness. (As do many other people.)
The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics. If you have not encountered this field before now, I strongly suggest that you investigate it in detail; there is a great deal of material there, which is relevant to this discussion.
(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense, but there’s no need whatever to rehash, in this thread, the arguments for why it is nonsense. I will only suggest that you read the sequences; much of the material therein is targeted at precisely this sort of topic, and this sort of viewpoint.)
“I disagree. I reject your standard of correctness. (As do many other people.)”
Shingles is worse than a cold. I haven’t had it, but those who have will tell you how bad the pain is. We can collect data on suffering by asking people how bad things feel in comparison to other things, and this is precisely what AGI will set about doing in order to build its database and make its judgements more and more accurate. If you have the money to alleviate the suffering of one person out of a group suffering from a variety of painful conditions and all you know about them is which condition they have just acquired, you can use the data in that database to work out which one you should help. That is morality being applied, and it’s the best way of doing it—any other answer is immoral. Of course, if we know more about these people, such as how good or bad they are, that might change the result, but again there would be data that can be crunched to work out how much suffering their past actions caused to undeserving others. There is a clear mechanism for doing this, and not doing it that way using the available information is immoral.
“The question of whether there is an objective standard of correctness for moral judgments, is the domain of metaethics.”
We already have what we need—a pragmatic system for getting as close to the ideal morality as possible based on collecting the data as to how harmful different experiences are. The data will never be complete, they will never be fully accurate, but they are the best that can be done and we have a moral duty to compile and use them.
“(I will avoid commenting on the reincarnation-related parts of your comment, even though they do form the bulk of what you’ve written. All of that is, of course, nonsense...”
If you reject that, you are doing so in favour of magical thinking, and AGI won’t be impressed with that. The idea that the sentience in you can’t go on to become a sentience in a maggot is based on the idea that after death that sentience magically becomes nothing. I am fully aware that most people are magical thinkers, so you will always feel that you are right on the basis that hordes of fellow magical thinkers back up your magical beliefs, but you are being irrational. AGI is not going to be programmed to be irrational in the same way most humans are. The job of AGI is to model reality in the least magical way it can, and having things pop into existence out of nothing and then return to being nothing is more magical than having things continue to exist in the normal way that things in physics behave. (All those virtual particles that pop in and out of existence in the vacuum, they emerge from a “nothing” that isn’t nothing—it has properties such as a rule that whatever’s taken from it must have the same amount handed back.) Religious people have magical beliefs too and they too make the mistake of thinking that numbers of supporters are evidence that their beliefs are right, but being right is not democratic. Being right depends squarely on being right. Again here, we don’t have absolute right answers in one sense, but we do have in terms of what is probably right, and an idea that depends on less magic (and more rational mechanism) is more likely to be right. You have made a fundamental mistake here by rejecting a sound idea on the basis of a bias in your model of reality that has led to you miscategorising it as nonsense, while your evidence for it being nonsense is support by a crowd of people who haven’t bothered to think it through.
How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.
Is 2+2 equal to 5 or to fish?
What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.
You should really read up on the Orthogonality Thesis and related concepts. Also, how do you plan on distinguishing between right and wrong moralities?
“How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.”
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
“Is 2+2 equal to 5 or to fish?”
Neither of those results works, but neither of them is my answer.
“What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.”
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“You should really read up on the Orthogonality Thesis and related concepts.”
Thanks - all such pointers are welcome.
“Also, how do you plan on distinguishing between right and wrong moralities?”
First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied. The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them. The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available. With incorrect moralities, some individuals are harmed for other’s gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.
I’ll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong.
“All sentiences are equally important” is definitely a moral statement.
I think that this is a fine (read: “quite good”; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.
I agree that it is mathematics, but where is this “proper” coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that “what maximizes X?” is objective, though.
“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of best proposed solutions, and there are no league tables for each entry listing the worst problems with them. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad—if they weren’t, they wouldn’t have equal values.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.” --> “‘All sentiences are equally important’ is definitely a moral statement.”
Again you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.”
Once we have AGI, it will be possible to have it run multiple models of morality, show up the differences between them, and prove that it is applying the logic correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
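A rough sketch of what that comparison could look like (the model functions and scenarios below are purely hypothetical placeholders): run every candidate moral model over the same set of scenarios and report each case where their verdicts differ, so that the disputed cases are exposed rather than argued about in the abstract.

```python
def report_disagreements(scenarios, models):
    """`models` maps a model name to a function that takes a scenario and
    returns a verdict; print every scenario on which the models disagree."""
    for scenario in scenarios:
        verdicts = {name: judge(scenario) for name, judge in models.items()}
        if len(set(verdicts.values())) > 1:
            print(f"{scenario}: {verdicts}")

# Hypothetical example: two toy "moralities" that differ on one case.
models = {
    "harm_weighing": lambda s: "forbid" if s == "torture one to amuse many" else "permit",
    "naive_pleasure_sum": lambda s: "permit",
}
report_disagreements(["torture one to amuse many", "take turns doing chores"], models)
```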
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct—a fair share, where everyone gets the same amount of reward for the same amount of suffering.
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will—bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm.
What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body, but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes. If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind. A theory of identity that better meshes with reality is that of functionalism, which in T-shirt slogan form is “the mind is a physical process, and the particulars of that process determine identity.”
For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?
This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.
Almost everything we care about is composite, so this is an odd way of putting it, but yes.
One need not carry out nuclear fission to deconstruct a chair.
The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.
Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.
“What does it mean to be somebody else? It seems like you have the intuition of a non-physical Identity Ball which can be moved from body to body,”
The self is nothing more than the sentience (the thing that is sentient). Science has no answers on this at all at the moment, so it’s a difficult thing to explore, but if there is suffering, there must be a sufferer, and that sufferer cannot just be complexity—it has to have some physical reality.
“but consider this: the words that you type, the thoughts in your head, all of these are purely physical processes.”
In an AGI system those are present too, but sentience needn’t be. Sentience is something else. We are not our thoughts or memories.
“If your Identity Ball were removed or replaced, there would be no observable change, even from within—because noticing something requires a physical change in the brain corresponding to the thought occurring within your mind.”
There is no guarantee that the sentience in you is the same one from moment to moment—our actual time spent as the sentience in a brain may be fleeting. Alternatively, there may be millions of sentiences in there which all feel the same things, all feeling as if they are the person in which they exist.
“For more on this, I recommend Yudkowsky’s writings on consciousness, particularly Zombies! Zombies?”
Thanks—I’ll take a look at that too.
“This Proves Too Much—you could say the same of any joint property. When I deconstruct a chair, where does the chairness go? Surely it cannot just disappear—that would violate the Conservation of Higher-Order Properties which you claim exists.”
Can you make the “chairness” suffer? No. Can you make the sentience suffer? If it exists at all, yes. Can that sentience evaporate into nothing when you break up a brain in the way that the “chairness” disappears when you break up a chair? No. They are radically different kinds of thing. Believing that a sentience can emerge out of nothing to suffer and then disappear back into nothing is a magical belief. The “chairness” of a chair, by way of contrast, is made of nothing—it is something projected onto the chair by imagination.
“If a sentience is a compound object which can be made to suffer without any of its components suffering, that’s magic too.” --> “One need not carry out nuclear fission to deconstruct a chair.”
Relevance?
“The definition of complex systems, one might say, is that they have properties beyond the properties of the individual components that make them up.”
Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there. In a universe with only one electron and nothing else at all, the property of the electron that repels it from another electron is a hidden property, but it’s already there—it doesn’t suddenly ping into being when another electron is added to the universe and brought together with the first one.
“Why should it require magic for physical processes to move an object out of a highly unnatural chunk of object-space that we define as “a living human being”? Life and intelligence are fragile, as are most meaningful categories. It is resilience which requires additional explanation.”
What requires magic is for the sentient thing in us to stop existing when a person dies. What is the thing that suffers? Is it a plurality? Is it a geometrical arrangement? Is it a pattern of activity? How would any of those suffer? My wallpaper has a pattern, but I can’t torture that pattern. My computer can run software that does intelligent things, but I can’t torture that software or the running of that software. Without a physical sufferer, there can be no suffering.
How do you know it exists, if science knows nothing about it?
This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.
Is your sentience in any way connected to what you say? Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?
So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?
They are both categories of things. The category that you happen to place yourself in is not inherently, a priori, a Fundamentally Real Category. And even if it were a Fundamentally Real Category, that does not mean that the quantity of members of that Category is necessarily conserved over time, that members cannot join and leave as time goes on.
It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.
How do you know that? And how can this survive contact with reality, where in practice we call things “chairs” even if there is no chair-ness in their atoms?
I recommend the Reductionism subsequence.
But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.
But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.
I’ll go with the last one.
Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.
You could torture the software, if it were self-aware and had a utility function.
But—where is the physical sufferer inside you?
You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!) And to go about this rationally, we would look at the differences between a brain and wallpaper—and since we haven’t observed any Consciousness Ball inside a brain, there’d be no reason to suppose that the difference is this unobservable Consciousness Ball which must be in the brain but not the wallpaper, explaining their difference. There is already an explanation. There is no need to invoke the unobservable.
“How do you know it exists, if science knows nothing about it?”
All science has to go on is the data that people produce which makes claims about sentience, but that data can’t necessarily be trusted. Beyond that, all we have is internal belief that the feelings we imagine we experience are real because they feel real, and it’s hard to see how we could be fooled if we don’t exist to be fooled. But an AGI scientist won’t be satisfied by our claims—it could write off the whole idea as the ramblings of natural general stupidity systems.
“This same argument applies just as well to any distributed property. I agree that intelligence/sentience/etc. does not arise from complexity alone, but it is a distributed process and you will not find a single atom of Consciousness anywhere in your brain.”
That isn’t good enough. If pain is experienced by something, that something cannot be in a compound of any kind with none of the components feeling any of it. A distribution cannot suffer.
“Is your sentience in any way connected to what you say?”
It’s completely tied to what I say. The main problem is that other people tend to misinterpret what they read by mixing other ideas into it as a short cut to understanding.
“Then sentience must either be a physical process, or capable of reaching in and pushing around atoms to make your neurons fire to make your lips say something. The latter is far more unlikely and not supported by any evidence. Perhaps you are not your thoughts and memories alone, but what else is there for “you” to be made of?”
Focus on the data generation. It takes physical processes to drive that generation, and rules are being applied in the data system to do this with each part of that process being governed by physical processes. For data to be produced that makes claims about experiences of pain, a rational process with causes and effects at every step has to run through. If the “pain” is nothing more than assertions that the data system is programmed to churn out without looking for proof of the existence of pain, there is no reason to take those assertions at face value, but if they are true, they have to fit into the cause-and-effect chain of mechanism somewhere—they have to be involved in a physical interaction, because without it, they cannot have a role in generating the data that supposedly tells us about them.
“So the Sentiences are truly epiphenomenal, then? (They have no causal effect on physical reality?) Then how can they be said to exist? Regardless of the Deep Philosophical Issues, how could you have any evidence of their existence, or what they are like?”
Repeatedly switching the sentient thing wouldn’t remove its causal role, and nor would having more than one sentience all acting at once—they could collectively have an input even if they aren’t all “voting the same way”, and they aren’t going to find out if they got their wish or not because they’ll be loaded with a feeling of satisfaction that they “won the vote” even if they didn’t, and they won’t remember which way they “voted” or what they were even “voting” on.
“They are both categories of things.”
“Chairness” is quite unlike sentience. “Chairness” is an imagined property, whereas sentience is an experience of a feeling.
“It’s the same analogy as before—just as you don’t need to split a chair’s atoms to split the chair itself, you don’t need to make a brain’s atoms suffer to make it suffer.”
You can damage a chair with an axe without breaking every bond, but some bonds will be broken. You can’t split it without breaking any bonds. Most of the chair is not broken (unless you’ve broken most of the bonds). For suffering in a brain, it isn’t necessarily atoms that suffer, but if the suffering is real, something must suffer, and if it isn’t the atoms, it must be something else. It isn’t good enough to say that it’s a plurality of atoms or an arrangement of atoms that suffers without any of the atoms feeling anything, because you’ve failed to identify the sufferer. No arrangement of non-suffering components can provide everything that’s required to support suffering.
“Nothing is ever more than the sum of its parts (including any medium on which it depends). Complex systems can reveal hidden aspects of their components, but those aspects are always there.” --> “How do you know that? And how can this survive contact with reality, where in practice we call things ‘chairs’ even if there is no chair-ness in their atoms?”
“Chair” is a label representing a compound object. Calling it a chair doesn’t magically make it more than the sum of its parts. Chairs provide two services—one is that they support a person sitting on them, and the other is that they support someone’s back leaning against them. That is what a chair is. You can make a chair in many ways, such as by cutting out a cuboid of rock from a cliff face. You could potentially make a chair using force fields. “Chairness” is a compound property which refers to the functionalities of a chair. (Some kinds of “chairness” could also refer to other aspects of some chairs, such as their common shapes, but those are not universal.) The fundamental functionalities of chairs are found in the forces between the component atoms. The forces are present in a single atom even when it has no other atom to interact with. There is never a case where anything is more than the sum of its parts—any proposed example of such a thing is wrong.
“I recommend the Reductionism subsequence.”
Is there an example of something being more than the sum of its parts there? If so, why don’t we go directly to that? Give me your best example of this magical phenomenon.
“But the capability of an arrangement of atoms to compute 2+2 is not inside the atoms themselves. And anyway, this supposed “hidden property” is nothing more than the fact that the electron produces an electric field pointed toward it. Repelling-each-other is a behavior that two electrons do because of this electric field, and there’s no inherent “repelling electrons” property inside the electron itself.”
In both cases, you’re using compound properties where they are built up of component properties, and then you’re wrongly considering your compound properties to be fundamental ones.
“But it’s not a thing! It’s not an object, it’s a process, and there’s no reason to expect the process to keep going somewhere else when its physical substrate fails.”
You can’t make a process suffer.
“Taking the converse does not preserve truth. All cats are mammals but not all mammals are cats.”
Claiming that a pattern can suffer is a way-out claim. Maybe the universe is that weird, but it’s worth spelling out clearly what it is you’re attributing sentience to. If you’re happy with the idea of a pattern experiencing pain, then patterns become remarkable things. (I’d rather look for something of more substance than a mere arrangement, but it leaves us both with the bigger problem of how that sentience can make its existence known to a data system.)
“You could torture the software, if it were self-aware and had a utility function.”
Torturing software is like trying to torture the text in an ebook.
“But—where is the physical sufferer inside you?”
That’s what I want to know.
“You have pointed to several non-suffering patterns, but you could just as easily do the same if sentience was a process but an uncommon one. (Bayes!)”
Do you seriously imagine that there’s any magic pattern that can feel pain, such as a pattern of activity where none of the component actions feel anything?
“There is already an explanation. There is no need to invoke the unobservable.”
If you can’t identify anything that’s suffering, you don’t have an explanation, and if you can’t identify how your imagined-to-be-suffering process or pattern is transmitting knowledge of that suffering to the processes that build the data that documents the experience of suffering, again you don’t have an explanation.