My interpretation of the article is not that he’s saying that gauging moral progress is impossible, but that you can’t gauge it by comparing the past with the present. It has to be gauged against an ideal, but the ideal has to be carefully chosen or else the ideal may be useless or self-defeating. I’m sure such an ideal can be constructed (maybe someone has already constructed one that’s widely considered to be acceptable—in that case I’d be interested in finding out about it) but since I’m not aware of a widely accepted ideal against which progress can be gauged at the moment, I’d like to focus on other parts of this. [Edited last statement to remove a source of potential conversational derailment.]
This does inspire me to bring up interesting questions, though, like:
Do I know enough about our past history to know whether it was previously better or worse? What if most Native American tribes respected women and gays and abhorred slavery before they were killed off by the settlers? Not to mention the thousands of civilizations that existed prior to these in so many places all over the world.
Might we be causing harm in new ways as well as ceasing to cause harm in other ways, moving backward overall? Even though Americans can’t keep slaves, they do get a lot of their goods from sweatshops. The prejudice against gays may be lessening, but has the prejudice against Middle Easterners increased to the point where it cancels out that progress? Women got the right to vote, but shortly before that, children were forced into the school system. The reasons I view this school system as unethical are touched on (in order) here and here.
I wonder if anyone has done thorough research to determine whether we’re moving forward or backward. I would earnestly like to know. It’s a topic I am very interested in. If you have a detailed perspective on this, I’d be interested in reading it.
For archival purposes, the source of potential conversational derailment was:
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
Well, the most straightforward way to judge success along this metric is to compare the amount of suffering. The problem with this metric is that the contribution of technological progress will dominate any contribution from ethical progress.
Might we be causing harm in new ways as well as ceasing to cause harm in other ways, moving backward overall? Even though Americans can’t keep slaves, they do get a lot of their goods from sweatshops. The prejudice against gays may be lessening, but has the prejudice against Middle Easterners increased to the point where it cancels out that progress? Women got the right to vote, but shortly before that, children were forced into the school system.
Furthermore, it’s not a priori obvious that the contribution to less suffering is what you think it is in any of the examples you listed. It’s possible that the people working in “sweatshops” are better off there than wherever they were before; this in fact seems likely, since they chose to work there. It’s possible that our modern attitude towards gender roles and sexuality is causing more unhappy marriages and children growing up in bad homes and thus increases suffering; conversely, maybe our attitudes towards gender are correct and our prejudice towards (Muslim) Middle Easterners is encouraging them to adopt it, and thus our prejudice is reducing suffering on net. As for the right to vote, well, there’s a slight positive effect from making women feel empowered, but the main effect is who wins elections, and whether they make better or worse decisions, which seems hard to measure.
My point is that doing these types of calculations is much harder than you seem to realize.
Edit: Also, what wedrifid said.
My point is that doing these types of calculations is much harder than you seem to realize.
I do realize that making these calculations is difficult. To be fair, when I first brought this up, I was talking about a completely different subject, in a comment that was already long enough and absolutely did not need a long tangent about the complexities of this added in. Then, I began exploring some of the complexities, hoping that you’d expand on them, but you instead chose to view my limited engagement with the topic as a sign that doing these kinds of calculations is harder than I realize. This is frustrating for two reasons. The first is that no matter what I said, it would not be possible for me to cover the topic in its entirety, especially not in a single message board comment. The second is that instead of continuing my discussion and adding to it, you changed the direction of the conversation each of the last two times you replied to me.
It might be that you’d make an excellent conversation partner to explore this with, but I am not certain you are interested in that. Are you interested in exploring this topic or were you just hoping to convince me that I don’t realize how complicated this is?
Then, I began exploring some of the complexities, hoping that you’d expand on them, but you instead chose to view my limited engagement with the topic as a sign that doing these kinds of calculations is harder than I realize.
Sorry about that; your examples pattern-matched to what someone who wanted to question contemporary practices without actually questioning contemporary ethics would write.
Thanks, Eugine. I can see in hindsight why I would look like that to you, but beforehand I didn’t expect anyone to jump on examples that weren’t elaborated upon to the degree you appear to have been expecting. I’m interested in continuing this discussion for reasons unrelated to the comment that originally spurred this off, as I’ve been thinking a lot lately about how to measure the ethical behavior of humans. I’m still wondering if you’re interested in talking about this. Are you?
I’d be interested, although this should possibly be done in a different thread.
Alright. Choose a location.
One ideal against which we could gauge moral progress without it being useless or self-defeating if taken to the extreme would be “Causing less suffering and death is good.”
I’m afraid once you take even that ideal to the extreme you will get something horrific. An effective way to minimize suffering and death is to minimize the number of things that can experience suffering and death. I.e., taking this ideal to the extreme kills everyone!
Watching what happens when a demigod of “Misguided Good” alignment actually implements this ideal forms the basis of the plot for Summer Knight, where Harry Dresden goes head to head against a powerful Fey who is just too damn sensitive and proactively altruistic for the world’s good.
An effective way to minimize suffering and death is to minimize the number of things that can experience suffering and death. I.e., taking this ideal to the extreme kills everyone!
Um, if you didn’t happen to notice, killing everyone qualifies as “death” and is therefore out of bounds for reaching that particular ideal.
Um, if you didn’t happen to notice, killing everyone qualifies as “death” and is therefore out of bounds for reaching that particular ideal.
Out of bounds? The ideal in question (“Causing less suffering and death is good”) doesn’t seem to have specified any bounds. That’s precisely the problem with this and indeed most forms of naive idealism. If you go and actually implement the ideal and throw away the far more complex and pragmatic restraints humans actually operate under, you end up with something horrible. While, all else being equal, causing less suffering and death is good, actually optimizing for less suffering and death is a lost purpose.
Almost any optimizer with the goal “cause less suffering and death” that is capable of killing everyone (comparatively) painlessly will in fact choose to do so. (Because preventing death forever is hard and not necessarily possible, depending on the details of physics.)
I was not talking about this in the context of building an optimizer. I was talking about this as a simple way for us, as humans, to gauge whether we had made ethical progress or not. I still think your specific concern about my specific ideal was not warranted:
Since killing everyone qualifies as “death”, I don’t see how it could possibly qualify as in-bounds as a method for reaching this particular ideal. Phrased differently, for instance as “Suffering and death are bad, let’s eliminate them,” the ideal could certainly lead to that. But I phrased it as “Causing less suffering and death is good.”
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying. You could argue that if they kill someone who might have had four children, four deaths were saved; however, I’d argue that those four future deaths were not originally caused by the particular idealist in question, so killing the potential parent of those potential four children would not be a way for that particular person to cause less death. They would instead be increasing the number of deaths that they personally caused by one, while reducing the number of deaths that they personally caused by absolutely nothing.
It does not use the word “eliminate” which is important because “eliminate” and “lessen” would result in two totally different strategies. Total elimination practically requires the death of all, as the only way for it to be perfect is for there to be nobody to experience suffering or death. “Lessen” gives you more leeway, by allowing the sort of “as good as possible” type implementation that leaves living things surviving in order to experience the lessened suffering and death.
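To make that accounting concrete, here is a rough toy sketch with entirely made-up numbers (just an illustration, not a serious model) of the difference between scoring the world by total deaths (the “eliminate” reading) and scoring the idealist by the deaths they personally cause (the “cause less” reading):
```python
# Toy illustration with made-up numbers: contrast a "total deaths" metric
# ("eliminate") with a "deaths caused by this agent" metric ("cause less")
# for a one-shot choice faced by the idealist.

population = 1000                    # hypothetical current population
deaths_if_left_alone = population    # eventually everyone dies of something

# Option A: the idealist does nothing.
total_deaths_A = deaths_if_left_alone
caused_by_idealist_A = 0

# Option B: the idealist painlessly kills everyone now.
total_deaths_B = population          # the same 1000 deaths, only sooner
caused_by_idealist_B = population    # and every one of them is now caused by the idealist

# Under the "eliminate" (total) metric, B is no worse on deaths and removes all
# future suffering, which is the horrific outcome described above.
# Under the "cause less" (agent-relative) metric, B is strictly worse:
assert caused_by_idealist_B > caused_by_idealist_A
print("Deaths caused by the idealist: A =", caused_by_idealist_A, "vs. B =", caused_by_idealist_B)
```
The numbers are arbitrary; the only point is that the two metrics rank the “kill everyone” option in opposite directions.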
Can you think of a way for the idealist to kill everyone in order to personally cause less death, without personally causing more death, or a reason why lessening suffering would force the idealist to go to the extreme of total elimination?
I used the wording “cause less” which means the people enacting the ideal would not be able to kill people in order to prevent people from dying.
The wording doesn’t prevent that, but your elaboration here does. You’ve (roughly speaking) replaced a simple consequentialist moral with a rather complex deontological one. The problems and potential failure modes change accordingly. Neither is an ideal against which I would gauge moral progress.
I’m glad we’re now in the same context.
Would you agree or disagree that no matter what anybody had proposed as a potential way of gauging moral progress, you most likely would have disagreed with it, and there most likely would have been the potential for practically endless debate?
What would be most constructive is to be told “Here is this other ideal against which to gauge progress that would be a better choice.” What I feel like I’m being told, instead, is “This is not perfect.” That is a given, and it’s not useful.
I would earnestly like to know whether humanity has made progress. If you want to have that discussion with me, would you mind contributing to the continuation of the conversation instead of merely kicking the conversation down?
Would you agree or disagree that no matter what anybody had proposed as a potential way of gauging moral progress, you most likely would have disagreed with it
I disagree with that hypothesis. I further note that I evaluate claims about value metrics “if taken to the extreme” differently to proposals advocating a metric to be used for a given purpose. In the latter case I consider whether it will be useful; in the former case I actually consider the extremes. In a forum where issues like lost purpose and the complexity and fragility of value are taken seriously and the consequences of taking simple value systems to the extremes are often considered, this should come as little surprise.
Can you propose an ideal that would work for this purpose?
In a forum where...
Alright, next time I want to talk about something that might involve this kind of ethical statement, if I’m not interested in posting a 400-page essay to clarify my every phoneme and account for every possible contingency, I will say something like “[insert perfect ethical statement here]”.
Edits the comment that started this to prevent further conversation derailment.
I actually consider the extremes...
I haven’t exactly dedicated my existence to composing an ideal useful for gauging human progress against or anything; I just started thinking about this yesterday. But I did consider the extremes.
I still don’t see anything wrong with this one, and you didn’t give me a specific objection; I only got a vague “the same problems as...”. From my point of view, it’s a bit prickly to imply that I haven’t considered the extremes. Have I considered them as much as you have? Probably not. But if you want me to see why this particular statement is wrong, I hope you realize that you’ll have to give me some specific refutation that reveals its uselessness or destructiveness.
I would earnestly like to know whether humanity has made progress. If you want to have that discussion with me, would you mind contributing to the continuation of the conversation instead of merely kicking the conversation down?
This was not responded to at all, and that’s frustrating.
It’s great that you care about this, and I know you have an interest in (and possibly a passion for?) this sort of reasoning, but ever since the comment where you first disagreed with me about this, I’ve been wondering what purpose you are hoping to serve. Lacking direct knowledge of that, all I have is this feeling of being sniped by a “Somebody on the internet is wrong!” reflex.
Regardless of motives, I feel negatively affected by this approach. I’m feeling all existentially angsty now, wondering whether there is any way at all to have any clue whether humanity is moving forward or backward, and tracing the cause back to this conversation, where I am the only one trying to build ideas about this and my respondents seem intent on tearing them down.
What I really wanted to get out of this was to get some ideas about how to gather data on ethics progress. Maybe somebody has already constructed an analysis. In that case, a book recommendation or something would have been great. If not, I was looking for additional ideas for going through the available information to get a gist of this. You’ve obviously thought about this, so I figure you must have something worthwhile to contribute toward constructive action here.
I mention this because it does not appear to have occurred to you that maybe I was doing an initial scouting mission, not setting out to solve a philosophical problem once and for all: I don’t need a perfect way to gauge this right this instant—a gist of things and a casual exploration of the scope involved are all that I’m realistically willing to invest time into at the present moment, so that would be satisfactory. I may dive into it later, but, as they say: “baby steps”.
If you have some idea of what ideal could be used to gauge progress against, I would appreciate it if you’d tell me what it is. If not, then is there some way in which you’re interested in continuing the exploration in a constructive manner that does not consist of me building ideas and you tearing them down?