IASP defines pain as “An unpleasant sensory and emotional experience associated with actual or potential tissue damage, or described in terms of such damage.”
I’m not sure that definition can be understood without having experienced pain, or other unpleasant sensations. So if we can’t even have an objective scientific definition of pain, why wouldn’t we be satisfied with “everyone who has experienced suffering knows what it is, and that’s as good a definition as we can get with modern science”?
Because then the dualists will win! /s
I guess I take it as a general principle of epistemology that things which cannot be defined rigorously in some language, without contradiction, don’t exist?
In any case, I don’t think coming up with a definition of “suffering” is that hard. I rather like my definition. I came up with it while trying to settle ethical questions concerning non-human animals.
Taboo suffering. What’s the bad thing that we want to avoid?
I don’t like it when other humans experience things which are extremely contrary to their preferences (I have altruism).
Humans are not the only class of things for which I experience altruism. Let’s define a class of things towards which I experience altruism as “person-like beings”.
a Being is 1) a type of object 2) which manipulates its surroundings in a pattern 3) which suggests that it has certain goals. The defining property of a Being is intelligence. A Paperclipper is a Being, but not a person...I feel no altruism towards the paperclipper, because while it’s intelligent it is not a Person to me.
When I probe my moral intuition, the Personish-ness of a Being seems to be most strongly related to its degree of preference for all other person-like beings to have their preferences fulfilled. I think this is the only necessary condition for me to feel altruism towards an object, but I’m uncertain. So the defining quality of a Person, then, is Empathy for others, with an intelligence multiplier.
In any case, the Bad Thing we want to prevent is the existence of Person-Beings who are not having their preferences fulfilled.
In other words, “suffering” is when an intelligent and empathetic object does not get its preferences fulfilled.
(Also, yes, I bite the bullet—humans with less empathy are less person-like within the semantic framework constructed above. I think most humans are roughly in the same spectrum of intelligence for moral purposes, but in edge cases I have to bite that bullet as well—though I’m a bit less comfortable with that. The “empathy” weighting seems much more important than the “intelligence” weighting.)
...Admittedly this is very rough and I’m sure you can poke holes in that (for example, the fact that the definition of person-like is self-referential could be exploited), but as per my own moral intuitions it seems roughly accurate. I think that if unstructured idle thought produces something that seems close to correct then, with sufficient thinking and modification we could come up with something that is correct.
Upvoted you back to zero. Let me try to poke a few holes, in good will.
What’s the bad thing that we want to avoid?
If you just look at the behavioural instead of the experiential aspect of suffering, this already eliminates anything that could be normally understood by the word.
I don’t like it when other humans experience things which are extremely contrary to their preferences
Preferences do not depend on the existence of suffering, although suffering seems to depend on the existence of preferences.
I feel no altruism towards the paperclipper, because while it’s intelligent it is not a Person to me.
Taboo person.
In other words, “suffering” is when an intelligent and empathetic object does not get its preferences fulfilled.
Are you saying that nonempathetic human beings can’t suffer? I find that claim bizarre.
humans with less empathy are less person-like within the semantic framework constructed above.
I’m a human being with less empathy, and I’m ready to protect my preferences, so be careful ;)
I think that if unstructured idle thought produces something that seems close to correct then, with sufficient thinking and modification we could come up with something that is correct.
With the current state of science, I personally don’t think we need to define suffering any more than we need to define colours. Once we know what happens in the brain when a person reports they are suffering, we know what suffering is and how to measure it.
I’m a human being with less empathy, and I’m ready to protect my preferences, so be careful ;)
Just to clarify, I’m talking about the “I don’t care about the suffering of other humans” / sociopathy sort of no-empathy, not the “I have trouble interpreting facial expressions” / autism sort of no-empathy. It’s unfortunate that we use the same word for those. Some psychologists use “Hot empathy (feeling)” and “Cold empathy (perception)” to differentiate.
And, since I can’t look at your brain directly, I wouldn’t actually feel reduced altruism towards you unless you actually did an action which demonstrated callous disregard for other people (self-diagnoses of sociopathy are insufficient evidence that someone actually doesn’t care about other people).
However, if you really are one who doesn’t experience hot empathy, then you aren’t really allowed to be offended by the fact that I feel reduced hot empathy towards you, because that’s just tit for tat. ;)
Although, I don’t actually care about Hot Empathy either. What I care about are your preferences—do you care about others as a (non-instrumental) value? Hot Empathy is where most humans derive their altruistic preferences from, but if you derive altruistic preferences via some other route then that works for me.
Are you saying that nonempathetic human beings can’t suffer? I find that claim bizarre.
Bleh...yeah. It is bizarre. How about we don’t call it “suffering”, and just focus on “the bad thing that we want to avoid” for now.
It seems like humans typically only extend altruism towards things which reciprocate altruism in return. Why are humans more bothered by the suffering of dogs than they are by the suffering of pigs, though the two animals are of comparable intellect? Other than mere familiarity, it’s because the former reciprocates altruism. It’s harder to slaughter something that shows affection towards you.
If you just look at the behavioural instead of the experiential aspect of suffering, this already eliminates anything that could be normally understood by the word.
I have a dream, that one day agents will be judged not by the substrate of their code, but by the behavioral output of whatever algorithm they run.
First of all, not doing so violates the anti-zombie principle, and second of all, if we interact with aliens or AI, I want us to be friends, and I want AI we design to consider them as friends too. So...if you want to define “suffering” to be referring to specific algorithms, I’m comfortable with that...but this discussion really isn’t about suffering, is it? It’s about morality. And morality shouldn’t care what sort of substrate your algorithm runs on, nor should it really care what specific algorithm you use except with regard to its output. (Though I can think of some fun edge cases here if you want to talk about that)
Like the paper said,
we do not (yet) have a good understanding of what suffering is (or rather, what the actual underlying component that is morally relevant, is)
I’m more trying to get at what is morally relevant about suffering, not defining suffering itself. Language is filled with fuzzy categories that dissolve under the application of rigor.
Taboo person.
Okay, so in this context, being classed as a Person means that in addition to intelligence, the following thing is approximately true:
“I care about the preferences of all agents X who have this statement embedded in their algorithm”.
(Yes, this is uncomfortable for me too. I haven’t worked out a non-self-referential version.)
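If it helps, here is a minimal toy sketch of one way the self-referential clause could be cashed out, as a fixed point over a set of agents. Everything in it (the agents, the caring weights, the intelligence scores) is made up purely for illustration; it is a sketch of the idea, not a proposal.

```python
# Toy sketch only: hypothetical agents, caring weights, and intelligence scores.
# "Person-ness" is defined self-referentially (a person is something that cares
# about the preferences of other persons), so approximate it as a fixed point:
# seed every agent with full person-ness, then iterate until scores stabilize.

agents = {
    # name: (intelligence in [0, 1],
    #        {other agent: how much this agent cares about that agent's preferences})
    "human":        (0.7, {"human": 0.9, "dog": 0.6, "paperclipper": 0.1}),
    "dog":          (0.3, {"human": 0.8, "dog": 0.7, "paperclipper": 0.0}),
    "paperclipper": (1.0, {"human": 0.0, "dog": 0.0, "paperclipper": 0.0}),
}

personness = {name: 1.0 for name in agents}  # seed: everyone starts as a person

for _ in range(100):
    total = sum(personness.values())
    new = {}
    for name, (intelligence, cares) in agents.items():
        # Empathy: caring about others, weighted by how person-like they currently look.
        weighted_caring = sum(cares[other] * personness[other] for other in agents)
        empathy = weighted_caring / total if total else 0.0
        new[name] = empathy * intelligence  # the "intelligence multiplier"
    if all(abs(new[n] - personness[n]) < 1e-9 for n in agents):
        break
    personness = new

print(personness)  # the paperclipper's score collapses to 0; the human's does not
```

On this toy weighting the paperclipper’s person-ness goes to zero despite its intelligence, while the human and the dog keep nonzero scores, which at least matches the intuition above.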
So, if snakes don’t even help each other, they aren’t people at all save for their tiny little spark of intelligence. If bees help each other sometimes, but never other species, they aren’t people at all because the altruism is only directed towards the colony.
When mice altruistically free other rodents from cages, it is a spark of personhood...but it’s limited because the mouse will only do this for species who display affective cues which it can understand. It doesn’t have the cognitive capacity to understand emotions in the abstract and apply that knowledge to, say, altruism towards a human or bird. So it’s not very person-ish...but we certainly wouldn’t torture it.
Dolphins, dogs, elephants, apes, etc...show cross species altruism and a high degree of intelligence. They are very person-ish. We should be really nice to them, in proportion to how person-ish they are.
A Paperclipper is sort of like a hyper-intelligent bee or snake. We don’t really care how it feels. A Friendly SAI, on the other hand, is even more person-ish than a human. We’d never want to violate the preferences of a Friendly SAI. (this is rather tautological, of course)
Part of the problem is that, in order to explain my idea, I took certain words and re-defined them away from their common usage to suit my purposes. I don’t know how to say this using the words we have now though. And the other problem is that it’s sloppy. I haven’t thought through this nearly enough.
But the general direction feels both intuitively comfy when I apply it to animals (that’s the closest thing to a truly alien mind that we have right now) and comes with the bonus of being somewhat pragmatic with its tit-for-tat attitude towards what sorts of beings we should be friendly towards.
I’m not going to be able to adequately answer comments this long in the future, especially because I disagree with the bulk of their content. You’re making a huge number of underlying assumptions you don’t seem to be explicitly stating, and it seems you’re not aware of those assumptions either.
However, if you really are one who doesn’t experience hot empathy, then you aren’t really allowed to be offended by the fact that I feel reduced hot empathy towards you, because that’s just tit for tat. ;)
I think you’re committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it’s the only one. Some of your thinking seems to be motivated by this.
See my comments in this thread if you’re confused by what I say so that I don’t have to reiterate myself. You said in a later paragraph you care about my preferences and I bet our preferences are pretty similar, despite our emotional life probably being quite different.
(self-diagnoses of sociopathy are insufficient evidence that someone actually doesn’t care about other people).
Psychopathy and sociopathy are much wider concepts than nonempathy. Even these wider concepts don’t imply sadism either. Be careful not to confuse them, as that has potential to insult a lot of people.
It seems like humans typically only extend altruism towards things which reciprocate altruism in return.
Could be. Do you find this principle morally sound? Do you propose being altruistic only towards people who can reciprocate it to you? Can that be called altruism?
I have a dream, that one day agents will be judged not by the substrate of their code, but by the behavioral output of whatever algorithm they run.
That’s fine if we have no methods that are more direct. If you knew what kind of computation suffering is, and you could directly find out whether someone suffers by scanning their brain, why on earth would you not rather use that?
First of all, not doing so violates the anti-zombie principle
Insisting on visible behavioural output means you don’t care about paralyzed people. I think insisting on visible output is the part that confuses your thinking the most.
So...if you want to define “suffering” to be referring to specific algorithms, I’m comfortable with that...but this discussion really isn’t about suffering, is it? It’s about morality.
You need to have terminal values to talk about morality, and as far as I’m concerned terminal values in human beings are, in many situations though not all, determined by their affects, like suffering.
Bleh...yeah. It is bizarre. How about we don’t call it “suffering”, and just focus on “the bad thing that we want to avoid” for now.
Because the bad thing most people want to avoid is suffering, and you’re butchering the concept.
I’m more trying to get at what is morally relevant about suffering, not defining suffering itself. Language is filled with fuzzy categories that dissolve under the application of rigor.
I’ve got no problem with your goal, but I’m sorry, you don’t seem to be applying the rigor. From my POV you’re taking suffering, taking everything that’s important about it, throwing it in the trash can and inventing your own concept that has nothing to do with what people mean when they use the word. Why should I care about this concept you produced from thin air?
“I care about the preferences of all agents X who have this statement embedded in their algorithm”.
All I can say about this is that whether some computation is a person doesn’t affect my altruism towards them whatsoever.
I do care about whether a snake or a bee has the computational equivalent of suffering happening in their brains, because I know from personal experience that suffering sucks, and I want less of it in this universe. I might care about what a paper clipper feels, but that would be dwarfed in importance by everything else that it does.
Affects like suffering are not the only factor when I’m deciding where to extend my altruism either, since my resources are limited.
I think you’re committing the typical mind fallacy here. It seems you have a lot of hot empathy, so because that is the most visible part of your altruistic cognition, you easily think it’s the only one. Some of your thinking seems to be motivated by this.
Mind projection fallacy is when you confuse map with territory and preferences with facts. What I’m doing is assuming other humans are like me—a heuristic which does in fact generally work.
But even so, I did mention:
I don’t actually care about Hot Empathy either. What I care about are your preferences—do you care about others as a (non-instrumental) value? Hot Empathy is where most humans derive their altruistic preferences from, but if you derive altruistic preferences via some other route then that works for me.
Does that ameliorate the criticism?
Even these wider concepts don’t imply sadism either. Be careful not to confuse them, as that has potential to insult a lot of people.
Does that mean you are offended? My apologies if so, I should have been more precise with language. However, I’m not sure why you think I confused sociopathy (lack of guilt, sympathetic pain) with sadism (pleasure via pain of others). Those two are almost opposites.
Insisting on visible behavioural output means you don’t care about paralyzed people.
Of course not. You still have to use the computation, but morally speaking you’re interested in the outputs of the computation. In the case of the paralyzed person, you look at their brain, see what their outputs would be if they were in a different situation, and act accordingly.
The reason we can’t just define suffering as a specific computation present in the brain is that when we are faced with other minds who use different computations to arrive at roughly the same output per input, we won’t recognize them as suffering...unless we define suffering in relation to input-output in the first place.
For example, most humans compute altruism via interactions between the amygdala and the vmPFC. Now, if someone doesn’t compute altruism that way, but still exhibits altruistic behavior...then isn’t it exactly the same thing? Weren’t you disturbed when you thought that I was presuming to judge a person based on their internal states rather than their behavior previously in this conversation?
We obviously still look at the computation, but the reason we are looking is to figure out what it wishes to output in response to various inputs. That’s what a computation is...a bridge from inputs to outputs.
I’m not sure if I’m explaining this correctly...a computation can’t be intrinsically suffering or intrinsically pleasure, and claiming that it is commits some sort of essentialism which doesn’t have a name yet...computational essentialism? You could take the exact same computation that represents suffering in one creature and re-purpose it entirely by changing the other computations with which it interacts. You can’t just point to some computation and say, “this is Suffering, no matter what the surrounding context is”.
I’m sorry, you don’t seem to be applying the rigor. From my POV you’re taking suffering, taking everything that’s important about it, throwing it in the trash can and inventing your own concept that has nothing to do with what people mean when they use the word. Why should I care about this concept you produced from thin air?
Acknowledged. Like I said:
Part of the problem is that, in order to explain my idea, I took certain words and re-defined them away from their common usage to suit my purposes. I don’t know how to say this using the words we have now though. And the other problem is that it’s sloppy. I haven’t thought through this nearly enough.
But your experiential definition of suffering is, by definition, inaccessible. If you define suffering that way, then the word will dissolve later on, much like words like “free will” tend to either dissolve or change definition so drastically that it scarcely seems like the same thing. The definition needs to change because the original definition doesn’t make sense. Qualia only applies to you, not to others.
I know from personal experience that suffering sucks, and I want less of it in this universe.
(by the way, this is pretty much the definition of the amygdala-vmPFC brand of “empathy” so I’m not sure why you refer to yourself as “low empathy”. Or did you think that by “empathy” I was referring to mere mirroring the affective states of those around you, like how people cry at movies or something?)
comments this long
Can’t be helped I’m afraid—this is one of those situations where brevity would take more effort. Not to worry, I don’t feel offended if people don’t reply to my comments, if that’s why you felt the need to mention that you might not be able to reply!
I guess I’ll just be brief myself then.
Typical mind fallacy.
I’m not sure why you think I confused sociopathy (lack of guilt, sympathetic pain) with sadism (pleasure via pain of others).
I was more concerned about the nonempathy-psychopathy confusion. I’m not offended, but other people will be.
most humans compute altruism via interactions between the amygdala and the vmPFC
You don’t know that, but more importantly naming brain regions doesn’t explain anything. It’s not necessary to bring real brains to the discussion.
referring to mere mirroring the affective states
Perhaps not mere, but that’s how people use the word.
Qualia only applies to you, not to others.
Only if you’re a solipsist. When people claim to have qualia, this is evidence they have qualia, because I have qualia, and they have brains similar to mine.
If we can make a high resolution record of what happens in their brain when they report qualia, we can look at what kind of computation those qualia are, and therefore determine if other agents have them too.
If we can make a high resolution record of what happens in their brain when they report qualia, we can look at what kind of computation those qualia are, and therefore determine if other agents have them too.
I’m confused...you seem to be suggesting that we use behavioral output to determine which parts of the brain are responsible for qualia, which you say should define morality… didn’t you just tell me that I shouldn’t use behavioral output to define my morality?
If we did it the way you said, and looked at the brain to see what happened when people reported perceiving things, we’d find out some cool things about human perception. However, there’s no guarantee that other minds will use the same computation. That’s why I’m emphasizing that it’s important to focus on the input-output functions of the algorithm, rather than the content of the algorithm itself. (Again, this does not mean we ignore the algorithm altogether—it means that we look at the algorithm with respect to what it would output for a given input—so we still care about paralyzed people, brains in vats, etc...since we can make guesses as to what they would output given minor changes to the situation.)
(Not to mention, there is a cascade of things happening from the moment your eyes perceive red to the moment your mouth outputs “Yeah, that’s red” and looking at an actual brain will tell you nothing about which part of the computation gets the “qualia” designation. At best, you’ll find some central hubs which handle information from many parts. Qualia, like free will, is a philosophical question—all the neuroscience knowledge in the world won’t help answer it. Neuroscience might help eliminate some obviously wrong hypotheses, as it did with free will, but fundamentally this is a question that can and should be settled without neuroscience. )
didn’t you just tell me that I shouldn’t use behavioral output to define my morality?
There’s probably a lot of misunderstanding going on between us. I thought you meant you always need the output. In my interpretation you only need the output once for a particular qualia in the optimal situation. After that, you can just start scanning brains or programs for similar computations. How much output we need, if any, depends on at what stage of understanding we are.
However, there’s no guarantee that other minds will use the same computation.
True. However, if the reporting of qualia corresponds to certain patterns of brain activity, and that brain activity can be expressed mathematically, then we have a computation and we can think about other ways the computation could be performed. We might even be able to test different forms of the computation on EMs, and see what they report.
“Yeah, that’s red” and looking at an actual brain will tell you nothing about which part of the computation gets the “qualia” designation.
This is incorrect, because there are temporal differences in brain activity. Light on your retina doesn’t instantly transfer information to all parts of your brain responsible for visual processing. Also, there’s no theoretical limitation on temporarily disabling certain brain areas or even single neurons, and examining how that corresponds to reporting of qualia.
Qualia, like free will, is a philosophical question—all the neuroscience knowledge in the world won’t help answer it.
You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?
Also, there’s no theoretical limitation on temporarily disabling certain brain areas or even single neurons, and examining how that corresponds to reporting of qualia.
Sure, but disable one part and the person won’t be able to verbally report information about the input, but can still use information from the input non-verbally. Which part is the “qualia” part?
You should think about this further. How much would you be willing to bet that unconscious people experience qualia? How about rocks?
I can’t bet on that until we agree upon a definition of qualia. Personally, as per the definition that makes coherent sense to me, qualia is the section of reality that I’ve got access to (and epistemology is an attempt to understand the most parsimonious system that explains my qualia). I don’t think it makes sense for anyone to talk about qualia, except in reference to themselves in the current moment. I suppose I’m a “soft” solipsist.
On the other hand, I like to define “consciousness” as “self-aware + environment-aware”. So to answer the spirit of the question, I’ll take qualia to mean “awareness”, and then we can at least say that interacting with something is a necessary condition for being “aware” of it. So rocks can’t be very aware, since they aren’t really interacting much with anything....whereas the various brain sections of unconscious people are sometimes interacting with themselves, so they might sometimes be self aware.
misunderstanding
I think the crux of that disagreement is as follows:
You think the algorithm matters morally, and the input-output function is relevant insofar as it gives us information about what the various algorithms mean.
I think the input-output function is what matters morally, and the algorithm is relevant insofar as it gives us information about what the input-output function is.
-- Stop reading here if brevity is important, otherwise...
To turn this into a more concrete problem: Suppose algorithm X made people cry and verbally report that they feel sad. You conclude that X is sadness. I conclude that X implements sadness.
If we then take X and modify all the things to which X is connected, such that it now makes people smile and verbally report that they feel happy, I say that X now implements happiness.
I’m guessing you’d say that X was never “happiness” in the first place, and “happiness” is actually in the interaction between X and the surrounding regions.
My argument is that there are infinite configurations of X and its surroundings. Since our judgement of what X+surroundings signifies finally depends on the output, it’s the output that really matters. If someone came to us saying they were in pain, we’d immediately care because of the output—it wouldn’t matter what the circuitry creating the pain looked like.
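To make the toy example concrete, here is a minimal sketch (hypothetical functions and thresholds, nothing more). The same subroutine X, wired into two different surroundings, drives opposite behavioral output, which is why I would say X implements sadness in one wiring and happiness in the other rather than being either.

```python
def x(stimulus: float) -> float:
    """The inner computation 'X': just a signal, with no intrinsic valence."""
    return stimulus * 2.0

def wiring_a(stimulus: float) -> str:
    # Surroundings map X's signal to crying and a verbal report of sadness.
    return "cries, reports feeling sad" if x(stimulus) > 1.0 else "neutral"

def wiring_b(stimulus: float) -> str:
    # The same X, different surroundings: the signal now drives smiling instead.
    return "smiles, reports feeling happy" if x(stimulus) > 1.0 else "neutral"

print(wiring_a(0.8))  # cries, reports feeling sad
print(wiring_b(0.8))  # smiles, reports feeling happy
```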
If a shallow mechanism for generating the output breaks (say, spinal cord injury) then we know what the output would be in a mildly counter-factual scenario, and that’s what matters morally.
The degree to which we need to make counter-factual assumptions before getting to output is important as well—on one extreme, if we are looking at a blank slate and we have to counter-factually assume the entire brain, the object has no moral significance. If we just have to counter-factually assume someone’s spinal cord is repaired, there is high moral significance. Something like a coma state would be an intermediate scenario...the question is basically, how much information do we have to add to this algorithm before it generates meaningful output.
(Note: the above paragraph’s reasoning is re-purposed—it was originally made for settling abortion and personhood debates)
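A rough way to picture the counterfactual-distance idea above, with made-up numbers and an arbitrary weighting function; it is only a sketch of the claim that significance falls off with how much we have to counter-factually assume:

```python
# Toy sketch with made-up numbers: how many counterfactual repairs would we
# have to assume before this system produces preference-bearing output?
counterfactual_repairs = {
    "healthy adult":      0,     # meaningful output right now
    "spinal cord injury": 1,     # assume the cord is repaired
    "deep coma":          5,     # assume arousal systems are restored, etc.
    "blank slate / rock": 1000,  # assume an entire brain into existence
}

def moral_weight(repairs_needed: int) -> float:
    # Moral significance falls off with counterfactual distance from real output.
    return 1.0 / (1.0 + repairs_needed)

for name, repairs in counterfactual_repairs.items():
    print(f"{name:20s} -> {moral_weight(repairs):.3f}")
```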
Another edge-case: Suppose you had a conscious being which was happy, but contained intact, suffering human brains in its algorithm. Because it would only take a very slight counter-factual modification to get those suffering human brains to generate suffering behavioral output, we still care about them morally.
Mind projection fallacy is when you confuse map with territory and preferences with facts. What I’m doing is assuming other humans are like me—a heuristic which does in fact generally work.
He said “typical mind fallacy”, not “mind projection fallacy”.
oops. thanks!
There is a big difference between pain and suffering, though there is certainly some overlap. Suffering is the important one to define.
why wouldn’t we be satisfied with “everyone who has experienced suffering knows what it is, and that’s as good a definition as we can get with modern science”
Because this “definition” does not help us figure out whether low-complexity WBEs suffer the same way humans do.
You didn’t answer my question. My point was that pain is simpler than suffering, and even scientists who study it can’t objectively define it.
Because this “definition” does not help us figure out whether low-complexity WBEs suffer the same way humans do.
Are you suggesting we shouldn’t even talk about their potential suffering then? On the same grounds we shouldn’t talk about animal suffering either. That human beings suffer is evidence for low-complexity WBEs and animals being capable of that too.
By the time we can make low-complexity WBEs we’ll probably have some understanding of what suffering computationally is, but it might be too late to start philosophizing about it then.
You didn’t answer my question. My point was that pain is simpler than suffering, and even scientists who study it can’t objectively define it.
First, the “objective” part of pain is known as nociception and can likely be studied in real or simulated organisms. The subjective part of pain need not be figured out separately from other qualia, like perception of color red.
Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.
Are you suggesting we shouldn’t even talk about their potential suffering then?
I think we have to work on formalizing qualia in general before we can make progress in understanding “computational suffering” specifically.
it might be too late to start philosophizing about it then
I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.
You were so busy refuting me you still didn’t answer this question: what kind of a definition of suffering would satisfy you? So that people could talk about it without it being a waste of time, y’know.
First, the “objective” part of pain is known as nociception and can likely be studied in real or simulated organisms
In the future? Yes. Right now? No. We have no idea what kind of computation happens in the brain when someone experiences pain. Just because it has a name doesn’t mean we have a clue.
Second, not all pain is suffering and not all suffering is pain, so figuring out the quale of suffering is separate from studying pain.
I agree. Do you agree that pain is simpler than suffering and therefore the easier problem and more likely to be solved first?
I think we have to work on formalizing qualia in general before we can make progress in understanding “computational suffering” specifically.
I know I can suffer. If a simple WBE is made from my brain it inherits similarities to my brain and this is evidence it can suffer, the same way a complex mammalian brain has similarities to my brain and this is evidence it can suffer. Do you find these ideas objectionable? What do you mean by formalizing qualia?
I find philosophizing without the goal of separating a solvable chunk of a problem at hand a futile undertaking and a waste of time. The linked paper does a poor job identifying solvable problems.
Could be so. I’m not defending the paper, and I suggest you shouldn’t assume everyone who reads your comment about it has read it.
This exchange does not seem to be going anywhere, so I’ll just leave my final comments before disengaging, feel free to do likewise.
The paper draft is an interesting and comprehensive survey of views on em suffering and related (meta)ethics
It does not do a good job defining its subject matter and thus does not advance the field of em ethics
One potential avenue of progress in em ethics and “em rights” is to define suffering in an externally measurable way for various levels of em complexity and architecture.
What kind of a definition would satisfy you?
Just so you know, I probably came off more confrontational than was my intention. Sorry about that if true.
I agree it’s better to halt these kinds of spats than try to find a fix after shit hits the fan.