(part 2 of two-part response, see below or above for the first)
That seems to be saying that it is instrumentally in people’s interests to be moral. But if that were always straightforwardly the case, then there would be no issues of sacrifices and self-restraint involved in morality, which is scarcely credible.
The key point is that “morality” isn’t straightforwardly “what people want” at all. What people consider moral when they evaluate all the information available to them and what people actually do (even with that information available) are often completely different things.
Note also that context and complicated conditionals become involved in Real Issues™. To throw out a toy example:
Julie might find it moral to kill three humans because she values the author of this post saying “Shenanigans” out loud only a bit less than their lives, and the author has committed to saying it three times out loud for each imaginary person dead in this toy example. However, Jack doesn’t want those humans dead, and has credibly signaled that he will be miserable forever if those three people die. Jack also doesn’t care about me saying “Shenanigans”.
Thus, because Julie cares about Jack’s morality (most humans, I assume, have values in their morality for “what other people of my tribe consider moral or wrong”), she will “make a personal sacrifice and use self-restraint” to not kill the three nameless, fortunate toy humans. The naive run of her morality over the immediate results says “Bah! Things could have been more fun.”, but game-theoretically she gains an advantage in the long term—Jack now cooperates with her, which means she incurs far fewer losses overall and still gains some value from her own people-alive moral counter and from Jack’s people-alive moral counter as well.
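To make that trade-off concrete, here is a minimal sketch in Python. All of the numbers are hypothetical (the toy example above gives none) and the variable names are mine; the only point is that the long-term term flips the decision.

```python
# Hypothetical payoffs for Julie, in arbitrary "value units".
VALUE_PER_SHENANIGANS = 0.9        # "only a bit less than" one life
VALUE_PER_LIFE = 1.0
VALUE_OF_JACKS_COOPERATION = 10.0  # long-run value of Jack cooperating with her

# Naive run over the immediate results: kill the three humans,
# collect three "Shenanigans" per death, lose three lives she values.
immediate_kill = 3 * 3 * VALUE_PER_SHENANIGANS - 3 * VALUE_PER_LIFE  # about 5.1
immediate_refrain = 0.0

# Game-theoretic run: refraining also buys Jack's long-term cooperation.
long_term_kill = immediate_kill
long_term_refrain = immediate_refrain + VALUE_OF_JACKS_COOPERATION   # 10.0

print(long_term_refrain > long_term_kill)  # True -- Julie exercises self-restraint
```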
If I lay down my life for my country, that might lead to the greater good, but how good is it for me? The issue is much more complex than you have stated.
I think you are vastly confusing “good”, “greater good”, and “good for me”. These need to be tabooed and reduced. Again, example time:
Tom the toy soldier cares about his life. Tom cares about the lives of his comrades. Tom cares about the continuation of the social system that can be summarized as “his country”.
If Tom dies without any reason or effect, this is clearly bad. However, Tom values the end of his country at 1/2 of his life. So far, he’s still not going to die for it. Tom also values each comrade’s life at 1/10th of his own. Still not going to die for his country. Tom also knows that the end of his country means a 95% chance that 200 of his comrades will die; with the other 5%, they all live. If the country does not end, there’s a 50% chance that 100 of his comrades will die anyway; with the other 50%, they live.
If Tom lives, there is 95% chance (as far as Tom knows, to his evidence, etc. etc.) that the country will end. If Tom sacrifices himself, the country is saved (with “certainty”, usual disclaimers etc. etc.).
So if Tom lives, Tom’s values go to −1/2 plus a .95 chance of a .95 chance of −20. If Tom sacrifices himself, the currently-alive Tom values this at −1 plus a .5 chance of −10. Values are in negative utility only for simplicity of calculation, but this could be described at length in any other system you want (with a bit more effort, though).
So the expected utility comes out at −18.55 if Tom lives, and −6 if Tom sacrifices himself, since Tom is a magical toy human and isn’t biased in any way and always shuts up and calculates and always knows exactly his own morality. So knowing all of this, Tom lays down his life for his country and what he would think of as “the greater good”.
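Spelled out as a quick Python sketch, following the simplified formula stated above (the variable names are mine):

```python
# Utilities are negative, in units of Tom's own life (his life = 1.0).
END_OF_COUNTRY = -0.5   # Tom values the end of his country at 1/2 of his life
COMRADE_DEATH = -0.1    # each comrade's life is worth 1/10th of his own

# If Tom lives: -1/2 plus a .95 chance of a .95 chance of 200 comrades dying.
eu_live = END_OF_COUNTRY + 0.95 * 0.95 * (200 * COMRADE_DEATH)

# If Tom sacrifices himself: -1 for his own life, plus a .5 chance
# that 100 comrades die anyway even though the country is saved.
eu_sacrifice = -1.0 + 0.5 * (100 * COMRADE_DEATH)

print(round(eu_live, 2))       # -18.55
print(round(eu_sacrifice, 2))  # -6.0
```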
I really don’t see how I’ve excluded this or somehow claimed that all of this was magically whisked away by any of what I said.
Overall, I think the only substantive disagreement we had is in your assessment that I didn’t think of / say anything useful towards solving interpersonal moral conflicts (I’m pretty sure I did, but mostly implicitly). I think the issue of what “morality” is for is entirely an empty word problem and should be ignored.
I’ll gladly attempt to reduce or taboo for reasonable requests to do so. If you think there are other issues we disagree on, I’d like them to be said. However, I would much appreciate efforts to avoid logical rudeness, and would also greatly appreciate it if, in further responses, you (or anyone else replying) did not assume that I have only thought this through at a single-tier, naive level, without giving it more than five minutes of thought.
Or, to rephrase positively: Please assume you’re speaking to someone who has thought of most of the obvious implications, has thought about this for a very considerable amount of time, has done some careful research, and thinks that this all adds up to normality.
So knowing all of this, Tom lays down his life for his country and what he would think of as “the greater good”.
Tom will sacrifice himself if his values lead him to it, and not if they don’t. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled “moral”.
I think the issue of what “morality” is for is entirely an empty word problem and should be ignored.
I think it isn’t. If someone tries to persuade you that you are wrong about morality, it is useful to consider the “what is morality for” question.
and thinks that this all adds up to normality.
Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
Tom will sacrifice himself if his values lead him to it, and not if they don’t. He might desert or turn traitor. You would still call that all moral because it is an output of the neurological module you have labelled “moral”.
Yes!
...
.
(this space intentionally left blank)
.
.
Do you think any of this adds up to any extent of a solution to the philosophical problems of morality/ethics?
What specific philosophical problems? Because yes, it does help me clarify my thoughts and figure out better methods of arriving at solutions.
Does it directly provide solutions to some as-yet-unstated philosophical problems? Well, probably not, since the search space of possible philosophical problems related to morality or ethics is pretty, well, huge. The odds that my current writings provide a direct solution to any given random one of them are pretty low.
If the question is whether or not my current belief network contains answers to all philosophical problems pertaining to morality and ethics, then the answer is a resounding no. Is it flabbergasted by many of the debates and many of the questions still being asked, and does it consider many of them mysterious and pointless? A resounding yes.
Consequentialism versus deontology, objectivism versus subjectivism, as in the context.
Oh. Yep.
As I said originally, both of those “X versus Y” and many others are just confusing and mysterious-sounding to me.
They seem like the difference between Car.Accelerate() and AccelerateObject(Car) in programming. Different implementations, some slightly more efficient for some circumstances than others, and both executing the same effective algorithm—the car object goes faster.
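For instance, here is a Python rendering of the two styles; the class and function names simply mirror that pseudocode:

```python
class Car:
    """A toy car with a single speed attribute."""
    def __init__(self, speed=0.0):
        self.speed = speed

    def accelerate(self, delta=1.0):
        # "Car.Accelerate()" style: the behaviour lives on the object.
        self.speed += delta


def accelerate_object(car, delta=1.0):
    # "AccelerateObject(Car)" style: the behaviour lives in a free function.
    car.speed += delta


car = Car()
car.accelerate()        # method call
accelerate_object(car)  # free-function call
print(car.speed)        # 2.0 -- either way, the car object goes faster
```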
Any would be good. Metaethics is sometimes touted as a solved problem on LW.
Oh. Well, yeah, it does sound kind-of solved.
Judging by the Wikipedia description of “meta-ethics” and the examples it gives, I find the meta-ethics sequence on LW gives me more than satisfactory answers to all of those questions.
See this later comment but this one especially (the first is mostly for context) to see that I do indeed take that into account.
You previously said something much more definite-sounding:
“I believe that there is an objective system of verifiable, moral facts which can be true or false”
...although it has turned out you meant something like “there are objective facts about de facto moral reasoning”.
The alleged solution seems as elusive as the Snark to me.