The ultimate definition would tell me why to care.
In the space of all possible meta-ethics, some are cooperative and others are not. This means that if you can choose which meta-ethics to spread through society, you stand a better chance of achieving your own goals if you spread cooperative meta-ethics. And cooperative meta-ethics is, by and large, what we call “morality”.
It’s “Do unto others...”, but abstracted a bit, so that we really mean “When determining what to do unto others, use the reasoning you would rather they used when deciding what to do unto you.”
Omega puts you in a room with a big red button.
“Press this button and you get ten dollars, but another person will be poisoned and slowly die. If you don’t press it, I punch you on the nose and you get no money.
They have a similar button which they can use to kill you and get ten dollars. You can’t communicate with them. In fact, they think they’re the only person being given the option of a button, so this problem isn’t exactly like the Prisoner’s Dilemma. They don’t even know you exist or that their own life is at stake.”
“But here’s the offer I’m making just to you, not them. I can imprint you both with the decision theory of your choice, Amanojack; of course, if you identify yourself in your decision theory, they’ll be identifying themselves.
“Careful though: this is a one-time offer, and afterwards I may put both of you to further, different tests. So choose the decision theory that you want both of you to have, and make it abstract enough to help you survive regardless of specific circumstances.”
Given the above scenario, you’ll end up wanting people to choose protecting the life of strangers over picking the ten dollars.
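To make the payoff structure concrete, here is a toy sketch (the utility numbers are my own, purely illustrative) of what happens when Omega imprints both players with the same rule:

```python
# Toy model of Omega's two-button setup, assuming Omega imprints BOTH players
# with the same decision rule. Utility numbers are made up for illustration.

def my_payoff(my_rule, their_rule):
    """Return my utility given each player's rule: 'press' or 'refrain'."""
    alive = their_rule == "refrain"           # I die iff they press their button
    money = 10 if my_rule == "press" else 0   # pressing earns the ten dollars
    punched = my_rule == "refrain"            # refraining earns a punch on the nose
    return (1000 if alive else -1000) + money + (-5 if punched else 0)

# My choice of rule fixes theirs too, so I only compare the two symmetric outcomes.
for rule in ("press", "refrain"):
    print(rule, my_payoff(rule, rule))
# 'refrain' wins: a punch and no money beats ten dollars plus being poisoned.
```

Under any assignment where staying alive outweighs ten dollars and a punch, the shared “refrain” rule comes out ahead, which is the sense in which you end up wanting everyone to protect the stranger.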
I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical space which, if occupied by all agents, would lead to the maximization of utility.
That makes it objectively (because it refers to all the agents, not some of them, or one of them) different from other points in meta-ethical space, and so it can be considered to lead to an objectively better morality.
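A minimal sketch of that definition, treating it as an argmax over candidate positions (the positions and utility numbers below are hypothetical placeholders, not a worked-out theory):

```python
# Sketch of the "attractor" idea: the meta-ethical position that, if adopted by
# ALL agents, maximizes total utility. Candidates and numbers are placeholders.

def total_utility_if_universal(position, agents):
    """Sum each agent's utility in a world where every agent holds `position`."""
    return sum(agent(position) for agent in agents)

# Each agent is modelled as a function from a universally adopted position
# to that agent's utility in the resulting world.
agents = [
    lambda p: {"cooperative": 10, "selfish": 1, "self-sacrificing": 6}[p],
    lambda p: {"cooperative": 9,  "selfish": 2, "self-sacrificing": 4}[p],
    lambda p: {"cooperative": 8,  "selfish": 0, "self-sacrificing": 7}[p],
]

candidates = ["cooperative", "selfish", "self-sacrificing"]
attractor = max(candidates, key=lambda p: total_utility_if_universal(p, agents))
print(attractor)  # -> "cooperative" under these made-up numbers
```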
Then why not just call it “universal morality”?
It’s called that too. Are you just objecting to what we are calling it?
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. “Objective morality” can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? And my point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even if so, perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.
I think my post was inspired more by TDT solutions to the Prisoner’s Dilemma and Newcomb’s problem, a decision theory that takes into account copies/simulations of itself, or other problems that involve humans getting copied and needing to make decisions in blind coordination with their copies.
I imagined systems that are not wholly copied, but rather share only the module that determines the meta-ethical constraints, and I tried to figure out in which directions such systems would try to modify themselves, knowing that other such systems would similarly modify themselves.
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
One thing, though, is that you’re using meta-ethics to mean ethics.
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES)
or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic, e.g.:
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase the total amount of freedom. (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters. (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD)
But some metaethical positions are invalid because of false assumptions (e.g. God’s existence). Other positions may not be abstract enough to become universal or to apply to all situations. Some combinations of ethics and metaethics may be the result of other factual or reasoning mistakes (e.g. someone thinks SSM will harm society, but it ends up helping it, even by the person’s own measure).
So, NO, I’m not necessarily speaking about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone a “greatest happiness utilitarianism”). I’m speaking about the hypothetical point in metaethical space that everyone would prefer everyone to have—an Attractor of metaethical positions.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by subjectively is that I will value one person’s life more than another person’s life, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s just all in my head. Comparing utility functions and working from demonstrated preference is usually—not always—a precursor to some kind of authoritarian scheme. I can’t say there is anything like that coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well, because the gains from co-operation are remote, and there are many realistic instances where selfish action produces immediate rewards (cf. the Prudent Predator objection to Rand’s egoistic ethics).
OTOH, since many people are selfish, they are made to care by having legal and social sanctions against excessively selfish behaviour.
Attempts to reduce real, altruistic ethics back down to selfish/instrumental ethics tend not to work that well,
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise; and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, same as it overlaps but doesn’t equal selfishness.
The difference between morality and immorality is that morality can, at its most abstract possible level, be cooperative, and immorality can’t.
This by itself isn’t a reason that can force someone to care—you can’t make a rock care about anything, but that’s not a problem with your argument. But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
I wasn’t talking about altruistic ethics, which can lead someone to sacrifice their life to prevent someone else getting a bruise;
Although it usually doesn’t.
and thus would be almost as disastrous as selfishness if widespread. I was talking about cooperative ethics—which overlaps with but doesn’t equal altruism, same as it overlaps but doesn’t equal selfishness.
I think that your version of altruism is a straw man, and that what most people mean by altruism isn’t very different from co-operation.
The difference between morality and immorality is that morality can, at its most abstract possible level, be cooperative, and immorality can’t.
Or, as I call it, universalisability.
But it’s something that leads to different expectations about the world, namely what Amanojack was asking for.
That argument doesn’t have to be made at all. Morality can stand as a refutation of the claim that anticipation of experience is of ultimate importance. And it can be made differently: if you rejig your values, you can expect to anticipate different experiences—it can be a self-fulfilling prophecy and not merely passive anticipation.
In a world populated by beings whose beliefs approach objective morality, I expect more cooperation and mutual well-being, all other things being equal. In a world whose beliefs don’t approach it, I expect more war and other devastation.
There is an argument from self-interest, but it is tertiary to the two arguments I mentioned above.