I would indeed prefer it if other people had certain moral sentiments. I don’t think I ever suggested otherwise.
Not quite my point. I’m not talking about what your preferences would be. That would be subjective, personal. I’m talking about what everyone’s meta-ethical preferences would be, if self-consistent and abstracted enough.
My argument is essentially that objective morality can be considered the position in meta-ethical space which, if occupied by all agents, would lead to the maximization of utility.
That makes it objectively different from other points in meta-ethical space (because it refers to all the agents, not some of them, or one of them), and so it can be considered to lead to an objectively better morality.
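To make that one sentence concrete (notation mine, and only a sketch: it assumes, purely for the formula, that individual utilities can be aggregated across agents, which is exactly the step disputed below):

$$m^{*} \;=\; \operatorname*{arg\,max}_{m \in \mathcal{M}} \; \sum_{i \in A} U_i(\text{everyone in } A \text{ adopts } m)$$

where $\mathcal{M}$ is the space of meta-ethical positions, $A$ is the set of all agents, and $U_i$ is agent $i$’s utility function. The claimed objectivity comes from the quantifier: the maximization ranges over all agents, not over any one agent’s preferences.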
Then why not just call it “universal morality”?
It’s called that too. Are you just objecting to what we’re calling it?
Yeah, because calling it that makes it pretty hard to understand. If you just mean Collective Greatest Happiness Utilitarianism, then that would be a good name. Objective morality can mean way too many different things. This way at least you’re saying in what sense it’s supposed to be objective.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively. And who’s going to be the person or group that decides? SIAI? I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas. But that’s a debate for another day.
I’m getting a bad vibe here, and no longer feel we’re having the same conversation.
“Person or group that decides”? Who said anything about anyone deciding anything? My point was that perhaps this is the meta-ethical position that every rational agent individually converges to. So nobody “decides”, or everyone does. And if they don’t reach the same decision, then there’s no single objective morality—but even if so, perhaps there’s a limited set of coherent metaethical positions, like two or three of them.
I personally think all this collectivism is a carryover from the idea of (collective) democracy and other silly ideas.
I think my post was inspired more by TDT solutions to the Prisoner’s Dilemma and Newcomb’s problem—a decision theory that takes into account copies/simulations of itself—or other problems that involve humans getting copied and needing to make a decision in blind coordination with their copies.
I imagined systems that are not wholly copied, but rather have in common just the module that determines the meta-ethical constraints, and tried to figure out in which directions such systems would try to modify themselves, knowing that other such systems would modify themselves similarly.
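Since the TDT reference may be opaque, here is a toy sketch of the copy-coordination intuition (my own construction, not anything from the original post): a one-shot Prisoner’s Dilemma against an exact copy of your own decision procedure, in which only the symmetric outcomes are reachable.

```python
# Toy sketch: one-shot Prisoner's Dilemma against an exact copy of your own
# decision procedure. Because the copy provably outputs whatever you output,
# only the symmetric outcomes (C, C) and (D, D) are reachable.

# Payoffs to "me" for (my_move, their_move); the numbers just encode the
# standard ordering.
PAYOFF = {
    ("C", "C"): 3,
    ("C", "D"): 0,
    ("D", "C"): 5,
    ("D", "D"): 1,
}

def best_move_against_copy():
    """Maximize payoff given that the opponent's move mirrors mine."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, move)])

def best_move_causal(their_move):
    """Classical best response, treating the opponent's move as fixed."""
    return max(["C", "D"], key=lambda move: PAYOFF[(move, their_move)])

print(best_move_against_copy())  # -> C: cooperation wins once mirroring is assumed
print(best_move_causal("C"))     # -> D: defection wins if the moves are independent
```

The analogue would be agents that are not full copies of one another but can verify that they share the meta-ethics-selecting module, and so evaluate a candidate position by asking what happens if every such agent adopts it.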
You’re right, I think I’m confused about what you were talking about, or I inferred too much. I’m not really following at this point either.
One thing, though, is that you’re using meta-ethics to mean ethics. Meta-ethics is basically the study of what people mean by moral language, like whether ought is interpreted as a command, as God’s will, as a way to get along with others, etc. That’ll tend to cause some confusion. A good heuristic is, “Ethics is about what people ought to do, whereas meta-ethics is about what ought means (or what people intend by it).”
One thing, though, is that you’re using meta-ethics to mean ethics.
I’m not.
An ethic may say:
I should support same-sex marriage. (SSM-YES)
or perhaps:
I should oppose same-sex marriage. (SSM-NO)
The reason for this position is the meta-ethic:
e.g.
Because I should act to increase average utility. (UTIL-AVERAGE)
Because I should act to increase total utility. (UTIL-TOTAL)
Because I should act to increase the total amount of freedom. (FREEDOM-GOOD)
Because I should act to increase average societal happiness. (SOCIETAL-HAPPYGOOD-AVERAGE)
Because I should obey the will of our voters. (DEMOCRACY-GOOD)
Because I should do what God commands. (OBEY-GOD)
But some metaethical positions are invalid because of false assumptions (e.g., God’s existence). Other positions may not be abstract enough to become universal or to apply to all situations. And some combinations of ethics and metaethics are the result of other factual or reasoning mistakes (e.g., someone thinks SSM will harm society when it would actually help it, even by that person’s own measure).
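To make that last kind of mistake concrete, a toy sketch (the function and the numbers are mine, purely illustrative): hold the meta-ethic fixed and vary only the factual belief about consequences, and the object-level position flips.

```python
# Toy sketch: a fixed meta-ethic (SOCIETAL-HAPPYGOOD-AVERAGE) applied to two
# different factual beliefs about a policy's consequences. Only the belief
# changes; the meta-ethic stays constant.

def societal_happygood_average(predicted_change_in_avg_happiness):
    """Support a policy iff it is believed to raise average societal happiness."""
    return "SSM-YES" if predicted_change_in_avg_happiness > 0 else "SSM-NO"

mistaken_belief = -0.2    # "SSM will harm society" (wrong, by stipulation)
corrected_belief = +0.3   # what the person's own measure actually shows

print(societal_happygood_average(mistaken_belief))   # -> SSM-NO
print(societal_happygood_average(corrected_belief))  # -> SSM-YES
```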
So, NO, I’m not necessarily talking about Collective Greatest Happiness Utilitarianism. I’m NOT talking about a specific metaethic, not even necessarily a consequentialist metaethic (let alone “greatest happiness utilitarianism”). I’m talking about the hypothetical point in metaethical space that everyone would prefer everyone to have—an attractor of metaethical positions.
As for this collectivism, though, I don’t go for it. There is no way to know another’s utility function, no way to compare utility functions among people, etc. other than subjectively.
That’s very contestable. It has frequently been argued here that preferences can be inferred from behaviour; it’s also been argued that introspection (if that is what you mean by “subjectively”) is not a reliable guide to motivation.
This is the whole demonstrated-preference thing. I don’t buy it myself, but that’s a debate for another time. What I mean by “subjectively” is that I will value one person’s life more than another’s, or I could think that I want that $1,000,000 more than a rich person wants it, but that’s all just in my head. Comparing utility functions and working from demonstrated preference is usually—not always—a precursor to some kind of authoritarian scheme. I can’t say anything like that is coming, but it does set off some alarm bells. Anyway, this is not something I can substantiate right now.
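To make the disagreement concrete, a toy sketch (my example, not anyone’s actual method): observed choices can pin down each person’s own ranking, which is the inference-from-behaviour point, but they supply no common scale for saying how much two different people want something, which is the “only subjectively” point.

```python
# Toy revealed-preference inference: binary choices recover an ordering
# *within* one agent, but nothing in the data supplies a common scale for
# comparing strength of preference *across* agents.

def infer_ordering(observed_choices):
    """observed_choices: list of (chosen, rejected) pairs for a single agent."""
    options = {option for pair in observed_choices for option in pair}
    # Rank each option by how many times it was the one chosen.
    wins = {o: sum(1 for chosen, _ in observed_choices if chosen == o) for o in options}
    return sorted(options, key=lambda o: -wins[o])

alice = [("opera", "football"), ("opera", "tv"), ("football", "tv")]
bob = [("tv", "opera"), ("tv", "football"), ("football", "opera")]

print(infer_ordering(alice))  # -> ['opera', 'football', 'tv']
print(infer_ordering(bob))    # -> ['tv', 'football', 'opera']
# Nothing here licenses a claim like "Alice wants opera more than Bob wants tv".
```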