Humans seem very empathetic to me, since we do worry about, for example, the treatment of animals—perhaps not planaria, but the important thing here is that we would dependably worry if we thought they minded. I cannot think of any mental distress that we would not be concerned about, no matter how far removed the ‘organism’.
But of course we are also very cruel, and I do see that as a perversion, because it is empathy turned against itself. Take our fascination with torture: at least it proceeds from the understanding that torture is bad. Cruelty is ‘interesting’ precisely because it stimulates our empathy. We wouldn’t care about causing harm for its own sake if we didn’t understand it was harm in the first place.
Whether people have empathy for animals varies a lot. I know at least one prominent Less Wronger who, when discussing vegetarianism, recently said that his utility function doesn’t have a term for non-human animal suffering. Moreover, history is clear that many humans don’t have much empathy for anyone beyond their own tribal group, and even when they have a theory of mind good enough to deceive and fight outsiders, they still might not care.
If the beings who make the simulation are much smarter than we are, then I see no reason why they wouldn’t take our suffering about as seriously as many humans take the suffering of animals used for medical research, or, to be more blunt, how some humans take hunting or dog-baiting or cockfighting, or a hundred other activities that cause pain and suffering to animals for their sheer amusement.
(Incidentally, I’m curious: does this universe look to you like one whose creators care for their creations?)
Sometimes humans have empathy towards suffering things, and sometimes they don’t. I guess you could call that either a capacity to lack empathy or a capacity to have it, and our disagreement about human empathy is a glass-half-full versus half-empty thing. Of course, if human behavior is any predictor of how aliens would treat us, the fact that humans aren’t consistently empathetic suggests that aliens might not treat us well. I don’t expect aliens to treat us well, whereas I do expect our simulators would. Perhaps, because I am grateful for certain aspects of the universe, I am giving the simulators too much credit, intelligence-wise and empathy-wise. Maybe they just cut and pasted a lot from their own universe and deserve less credit than I give them.
If the beings who make the simulation are much smarter than we are, then I see no reason why they wouldn’t take our suffering about as seriously as many humans take the suffering of animals for medical research,
I suppose, but nevertheless I’m OK with this. It would be nice to have a purpose.
or, to be more blunt, how some humans take hunting or dog-baiting or cockfighting, or a hundred other activities that cause pain and suffering to animals for their sheer amusement.
While I would expect these behaviors from any evolved intelligence (excepting whales, perhaps), they are so contradictory to other evolved traits that I think they must be transient. For example, many people don’t enjoy such things at all, and cockfighting is illegal where I live.
If such traits are not in fact transient, then I am wrong about all of this.
While I would expect these behaviors from any evolved intelligence (excepting whales, perhaps), they are so contradictory to other evolved traits that I think they must be transient. For example, many people don’t enjoy such things at all, and cockfighting is illegal where I live.
Sure, many people don’t, but how much of that is simply due to cultural norms? Many such activities are outlawed less because of the suffering involved than because they are associated with lower classes or marginalized groups. Look at how, in the United States, hunting is in many areas a popular pastime, while in most of the country dogfighting is illegal. Why? Well, without delving too much into the mindkilling of politics, dogfighting is a sport historically popular with lower-income black people, while hunting is popular across a variety of income groups among white people.
Among humans the general trend does seem to be towards more empathy and caring. But for another species, even if we expect a similar trend, there’s no reason to think it will outpace the growth of technology enough that they will not want to cause harm to their sims.
Oops, I just realized that in this last comment (the sibling to this one) I blurred two compartments of thought. I don’t mind that I have different compartments, but I consider it a failure if I cannot remain in one throughout a thread. I guess what happened is that you convinced me there is reason to be cynical about human empathy, which became cynicism about human value, which inevitably leads to a set of grooves about value drift and my dissatisfaction with the lack of a framework of objective value (“FOOV”). So if you had the impression I switched gears regarding my initial position, you are correct.
By the way, I don’t consider cynicism or optimism about human moral progress to be a factual matter, but two perspectives of the same scene. Over the weekend I attended a meeting that had me swayed in the optimism direction.
Sure, many people don’t, but how much of that is simply due to cultural norms?
It is probably entirely the evolution of cultural norms, but why dismiss that? The important question is whether there is a predetermined direction to the evolution of cultural norms, and it seems we agree that a general trend is towards more empathy and caring (with some reservations) but that this isn’t necessarily reliable.
I often think about whether or not humanity is ‘good’ and whether the cultural development of our empathy will outpace other factors, and I’ve settled on this conclusion: if our universe is not designed, things will probably not work out well, but if it was designed by benevolent, caring entities, it will somehow work out, no matter how small the probability seems.
In other words, without a designer, we’re doomed anyway to a universe of random and arbitrary entities that won’t conform to our (also) random and arbitrary moral preferences. With a designer, there is finally the possibility of a plan (and an imposed external set of moral preferences), and there is some probability (which I count as high) that we are part of the plan and could therefore trust that we would be happy with its outcome. Here ‘we’ doesn’t necessarily mean us specifically, but future humans, or another self-aware lineage, or at least the designers themselves. Some set of conscious entities being happy with the universe seems like a good thing to me, better than a random flux of dissatisfied ones.
So, to answer your question a couple of comments up: at the moment I don’t believe that our universe looks like it was designed by a caring entity, or that humanity is necessarily good. In my mind the problem is that there is no designer. A designer, after all, would terrifically increase the chances of moral success (from someone’s point of view) compared to a random universe.