This is a question about utilitarianism, not AI, but can anyone explain (or provide a link to an explanation of) why reducing the total suffering in the world is considered so important? I thought that we pretty much agreed that morality is based on moral intuitions, and it seems pretty counterintuitive to value the states of mind of people too numerous to sympathize with as highly as people here do.
It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person, rather than something your conscience actually motivates you to do, but people here are usually pretty averse to conscious signaling, so I’m not sure that works as an explanation. I’m certain this has been covered elsewhere, but I haven’t seen it.
When I become directly acquainted with an episode of intense suffering, I come to see that this is a state of affairs that ought not to exist. My empathy may be limited, but I don’t need to empathize with others to recognize that, when they suffer, their suffering ought to be relieved too.
I don’t pretend to speak on behalf of all other hedonistic utilitarians, however. Brian himself would probably disagree with my answer. He would instead reply that he “just cares” about other people’s suffering, and that’s that.
Knowing that you’ve abandoned moral realism, how would you respond to someone making an analogous argument about preferences or duties? For instance, “When a preference of mine is frustrated, I come to see this as a state of affairs that ought not to exist,” or “When someone violates a duty, I come to see this as a state of affairs that ought not to exist.” Granted, the acquaintance may not be as direct as in the case of intense suffering. But is that enough to single out pleasure and suffering?
Preventing suffering is what I care about, and I’m going to try to convince other people to care about it. One way to do that is to invent plausible thought experiments / intuition pumps for why it matters so much. If I do, that might help with evangelism, but it’s not the (original) reason why I care about it. I care about it because of experience with suffering in my own life, feeling strong empathy when seeing it in others, and feeling that preventing suffering is overridingly important due to various other factors in my development.
Thanks, Brian. I know this is your position, I’m wondering if it’s benthamite’s as well.
"It seems to me that reducing suffering in a numbers game is the kind of thing you would say is your goal because it makes you sound like a good person"
I am not sure that the hedonistic utilitarian agenda is high status. The most plausible cynical/psychological critique of hedonistic utilitarians is that they are too worried about ethical consistency and about coherently extrapolating a simple principle from their values.
Cooperation for mutual benefit. Potential alliance building. Signalling of reliability, benevolence, and capability. It’s often beneficial to adopt a general policy of helping strangers whenever the personal price is low enough. And (therefore) the human mind is such that people mostly enjoy helping others as long as it’s not too strenuous.
You could reduce human suffering to 0 by reducing the number of humans to 0, so there’s got to be another value greater than reducing suffering.
It seems plausible to me that suffering could serve some useful purpose, and eliminating it (or seeking to eliminate it) might have horrific consequences.
Almost all hedonistic utilitarians are concerned with maximizing happiness as well as minimizing suffering, including Brian. The reason he talks about suffering so much is that he ranks a unit of suffering as, say, a −3 experience where most people would rank it as, say, a −1 experience. And he thinks that there is much more suffering than happiness in the world, and that suffering is easier to prevent.
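To put that weighting in rough symbols (the notation, including the weight $k$, is just my own illustration, not anything Brian has endorsed): a hedonistic utilitarian scores a world as

\[
V = \sum_i h_i - k \sum_j s_j,
\]

where the $h_i$ are units of happiness, the $s_j$ are units of suffering, and $k$ is the exchange rate between them. Someone who rates a unit of suffering at −3 while most people rate it at −1 is in effect using $k = 3$ rather than $k = 1$, so for him preventing one unit of suffering outweighs producing anything less than three units of happiness.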
(Sorry if I got any of this wrong Brian)
Thanks, Jabberslythe! You got it mostly correct. :)
The one thing I would add is that I personally think people don’t usually take suffering seriously enough—at least not really severe suffering like torture or being eaten alive. Indeed, many people may never have experienced something that bad. So I put high importance on preventing experiences like these relative to other things.
I'm not strongly emotionally motivated to reduce suffering in general, but I realize that my own suffering and others' suffering are instances of suffering in general, so I think it's a good policy to try to reduce world-suck. This is reasonably approximated by saying I would like to reduce unhappiness or increase happiness or some such thing.