Yes, but the fact that the universe itself seems to adhere to the logical systems by which we construct mathematics gives credence to the idea that the logical systems are fundamental, something we’ve discovered rather than produced. We judge claims about unobserved mathematical constructs like transfinites according to those systems.
But claims about transfinites don’t correspond directly to any object. Maths is “spun off” from other facts, on your view. So, by analogy, moral realism could be “spun off” without needing any Form of the Good to correspond to goodness.
Metaethical systems usually have axioms like “Maximising utility is good”.
But utility is a function of values. A paperclipper will assign utility according to different values than a human does.
You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn’t care what the values are; it just sums or averages them.
Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some values in common, because they are all rational.
Why would most rational minds converge on values?
a) they don’t have to converge on preferences, since things like utilitarianism are preference-neutral.
b) they already have, to some extent, because they are rational.
Most human minds converge on some values, but we share almost all our evolutionary history and brain structure. The fact that most humans converge on certain values is no more indicative of rational minds in general doing so than the fact that most humans have two hands is indicative of most possible intelligent species converging on having two hands.
I was talking about rational minds converging on moral claims, not on values. Rational minds can converge on “maximise group utility” whilst what counts as utility varies considerably.
Philosophers talk about intuitions because that is the term for something foundational that seems true but can’t be justified by anything more foundational. LessWrongians don’t like intuitions, but don’t seem to be able to explain how to manage without them.
It seems like you’re equating intuitions with axioms here.
Axioms are formal statements; intuitions are gut feelings that are often used to justify axioms.
We can (and should) recognize that our intuitions are frequently unhelpful at guiding us to the truth, without throwing out all axioms.
There is another sense of “intuition” where someone feels that it’s going to rain tomorrow or something. Those are not the foundational kind.
And philosophers frequently fall into the pattern of believing that other philosophers disagree with each other due to a failure to understand the problems they’re dealing with.
But claims about transfinites don’t correspond directly to any object. Maths is “spun off” from other facts, on your view. So, by analogy, moral realism could be “spun off” without needing any Form of the Good to correspond to goodness.
Spun off from what, and how?
You seem to be assuming that morality is about individual behaviour. A moral realist system like utilitarianism operates at the group level, and would take paperclipper values into account along with all others. Utilitarianism doesn’t care what the values are; it just sums or averages them.
Speaking as a utilitarian, yes, utilitarianism does care about what values are. If I value paperclips, I assign utility to paperclips; if I don’t, I don’t.
Or perhaps you are making the objection that an entity would need moral values to care about the preferences of others in the first place. That is addressed by another kind of realism, the rationality-based kind, which starts from noting that rational agents have to have some values in common, because they are all rational.
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
I was talking about rational minds converging on moral claims, not on values. Rational minds can converge on “maximise group utility” whilst what counts as utility varies considerably.
So what if a paperclipper arrives at “maximize group utility,” and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn’t demand any overlap of end-goal with other utility maximizers.
Axioms are formal statements; intuitions are gut feelings that are often used to justify axioms.
But, as I’ve pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.
If our axioms are grounded in our intuitions, then entities which don’t share our intuitions will not share our axioms.
So do they call for them to be fired?
No, but neither do I, so I don’t see why that’s relevant.
Designating PrawnOfFate a probable troll or sockpuppet. Suggest terminating discussion.
Request accepted. I’m not sure if he’s being deliberately obtuse, but I think this discussion probably would have borne fruit earlier if it were going to. I too often have difficulty stepping away from a discussion as soon as I think it’s unlikely to be a productive use of my time.
What is your basis for the designation? I am not arguing with your suggestion (I was leaning in the same direction myself); I’m just genuinely curious. In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?
Combined behavior in other threads. Check the profile.
In other words, why do you believe that PrawnOfFate is a troll, and not someone who is genuinely confused?
“Troll” is a somewhat fuzzy label. Sometimes, when I want to be precise or polite and avoid any hint of the Fundamental Attribution Error, I will replace it with the rather clumsy or verbose “person who is exhibiting a pattern of behaviour which should not be fed”. The difference between “person who gets satisfaction from causing disruption” and “person who is genuinely confused and is displaying an obnoxiously disruptive social attitude” is largely irrelevant (particularly when one has their Hansonian hat on).
If there were a word in popular use that meant “person likely to be disruptive and who should not be fed”, one that made no assumptions or implications about the intent of the accused, then that word would be preferable.
I am not sure I can explain that succinctly at the moment. It is also hard to summarise how you get from counting apples to transfinite numbers.
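Though a very compressed sketch of one standard set-theoretic route (glossing over a great deal, and not the only way to do it) might run:

```latex
% A rough sketch of one standard route from counting to the transfinite.
\begin{itemize}
  \item Counting gives the finite von Neumann ordinals: $0 := \emptyset$, $n+1 := n \cup \{n\}$.
  \item $\mathbb{Z}$ and $\mathbb{Q}$ are built as equivalence classes of pairs of earlier numbers;
        $\mathbb{R}$ via Dedekind cuts or Cauchy sequences.
  \item ``Same size'' is defined by the existence of a bijection; $|\mathbb{N}| =: \aleph_0$.
  \item Cantor's diagonal argument shows $|\mathbb{R}| > \aleph_0$, so there are strictly larger,
        transfinite cardinals.
  \item The ordinals continue the successor and limit rules past the finite:
        $\omega,\ \omega+1,\ \dots,\ \omega \cdot 2,\ \dots$
\end{itemize}
```

Each step reuses the logical machinery already in play at the previous one, which is the sense in which the transfinite is “spun off” from counting.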
Why does their being rational demand that they have values in common? Being rational means that they necessarily share a common process, namely rationality, but that process can be used to optimize many different, mutually contradictory things. Why should their values converge?
Rationality is not an automatic process; it is a skill that has to be learnt and consciously applied. Individuals will only be rational if their values prompt them to be. And rationality itself implies valuing certain things (lack of bias, non-arbitrariness).
So what if a paperclipper arrives at “maximize group utility,” and the only relevant member of the group which shares its conception of utility is itself, and its only basis for measuring utility is paperclips? The fact that it shares the principle of maximizing utility doesn’t demand any overlap of end-goal with other utility maximizers.
Utilitarians want to maximise the utility of their groups, not their own utility. They don’t have to value the utility of others themselves; they just need to feed facts about group utility into an aggregation function. And, using the same facts and the same function, different utilitarians will converge. That’s kind of the point.
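As a minimal sketch of that point, assuming toy outcomes and made-up numbers (the agents, figures, and outcomes below are purely illustrative, not anything from this discussion):

```python
# Toy sketch: utilitarian aggregation is indifferent to *what* the agents value.
# The agents, outcomes, and numbers here are assumed purely for illustration.
from statistics import mean

# Each agent reports how much utility it assigns to each candidate outcome.
preferences = {
    "human":        {"build_hospital": 10, "build_paperclip_factory": 1},
    "paperclipper": {"build_hospital": 0,  "build_paperclip_factory": 8},
}

outcomes = ["build_hospital", "build_paperclip_factory"]

def group_utility(outcome, aggregate=sum):
    """Feed every agent's utility for `outcome` into a single aggregation function."""
    return aggregate(prefs[outcome] for prefs in preferences.values())

# Anyone who feeds the same facts into the same aggregation rule gets the same
# group utility, whatever the underlying values happen to be. "Sum" and "average"
# are the two rules mentioned above, and here they agree on the ranking.
best_by_sum = max(outcomes, key=group_utility)
best_by_mean = max(outcomes, key=lambda o: group_utility(o, aggregate=mean))
print(best_by_sum, best_by_mean)  # -> build_hospital build_hospital
```

Swap in any set of values, human or paperclipper, and the procedure is unchanged; only the inputs differ.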
But, as I’ve pointed out previously, intuitions are often unhelpful, or even actively misleading, with respect to locating the truth.
Compared to what? Remember, I am talking about foundational intuitions, the kind at the bottom of the stack. The empirical method of locating the truth rests on the intuition that the senses reveal a real external world. Which I share. But what proves it? That’s the foundational issue.