But is the justification for its global applicability that “if everyone lived by that rule, average happiness would be maximized”?
Well, no, that’s not Kant’s justification!
That (or any other such consideration) itself is not a mandatory goal, but a chosen one.
Why would a rational agent choose unhappiness?
If you find yourself to be the worshipped god-king in some ancient Mesopotamian culture, there may be many more effective ways to make yourself happy than the Categorical Imperative.
Yes, but that wouldn’t count as ethics. You wouldn’t want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn’t want to be a slave, and you probably would be.
This is brought out in Rawls’ version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.
My argument against moral realism and assorted positions is that if you had an axiomatic system from which it followed that strawberry is the best flavor of ice cream, but other agents which are just as intelligent, with just as much optimizing power, could use different axiomatic systems leading to different conclusions,
You don’t have object-level stuff like ice cream or paperclips in your axioms (maxims); you have abstract stuff, like the Categorical Imperative. You then arrive at object-level ethics by plugging in details of actual circumstances and values. These will vary, but not in the arbitrary way that is the disadvantage of anything-goes relativism.
how could one such system possibly be taken to be globally correct and compelling-to-adopt across agents with different goals?
The idea is that things like the CI have rational appeal.
Once you’ve taken a rational agent apart and know its goals and, as a component, its ethical subroutines, there is no further “core spark” which really yearns to adopt the Categorical Imperative.
Rational agents will converge on a number of things because they are rational. None of them will think 2+2=5.
Scenario:
1) You wake up in a bright box of light, no memories. You are told you’ll presently be born into an absolute monarchy, your role randomly chosen. You may choose any moral principles that should govern that society. The Categorical Imperative would on average give you the best result.
2) You are the monarch in that society: you do not need to guess which role you’re being born into, because you have that information. You don’t need to make all the slaves happy to help your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result (see the toy calculation below).
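To make the contrast concrete, here is a minimal sketch with made-up numbers; the population split, the payoffs, and the rule names are all invented for illustration, not taken from anything above.

```python
# Toy model of the two scenarios: a society of 1 monarch and 99 slaves,
# and two candidate rules. Every figure here is an invented assumption.

ROLES = {"monarch": 1, "slave": 99}          # hypothetical population split

UTILITY = {                                  # hypothetical payoffs per role
    "categorical_imperative": {"monarch": 6, "slave": 6},    # egalitarian rule
    "monarch_takes_all":      {"monarch": 10, "slave": 1},   # self-serving rule
}

def expected_utility_behind_veil(rule):
    """Scenario 1: your role is drawn at random, so weight each role by its frequency."""
    total = sum(ROLES.values())
    return sum(UTILITY[rule][role] * count / total for role, count in ROLES.items())

def utility_as_monarch(rule):
    """Scenario 2: you already know you are the monarch."""
    return UTILITY[rule]["monarch"]

for rule in UTILITY:
    print(rule,
          "| behind the veil:", round(expected_utility_behind_veil(rule), 2),
          "| as the monarch:", utility_as_monarch(rule))

# Behind the veil the egalitarian rule wins (6.0 vs 1.09); once you know you are
# the monarch, maximizing directly wins (10 vs 6).
```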
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
Lastly, whatever Kant’s justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents? Here come my algae!
You are the monarch in that society: you do not need to guess which role you’re being born into, because you have that information. You don’t need to make all the slaves happy to help your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
For what value of “best”? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn’t maximise your personal utility. But I don’t see why you would expect that. Things like utilitarianism, which seek to maximise group utility, don’t promise to make everyone blissfully happy individually. Some will lose out.
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics and other Aumann-mandated agreements, yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
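A toy payoff comparison makes the point; the 50% victory odds and the share of the universe destroyed by conflict are invented assumptions, not anything stated here.

```python
# Expected share of the universe each agent converts to its own goal under
# three outcomes. All numbers are invented for illustration.

OUTCOMES = {
    "no_contract":     {"Clippy": 0.5 * 0.6, "Beady": 0.5 * 0.6},  # fight: 50% win chance, 40% of resources destroyed
    "split_contract":  {"Clippy": 0.5,       "Beady": 0.5},        # negotiated half each
    "beady_takes_all": {"Clippy": 0.0,       "Beady": 1.0},        # the deal Clippy would never sign
}

for name, payoff in OUTCOMES.items():
    print(f"{name:16s} Clippy={payoff['Clippy']:.2f}  Beady={payoff['Beady']:.2f}")

# Both prefer split_contract (0.50 each) to no_contract (0.30 each), even though
# their goals never converge; only the contract that hands everything to one side
# is irrational for the other to sign.
```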
Lastly, whatever Kant’s justification, why can you not optimize for a different principle (peak happiness versus average happiness)? What makes any particular justifying principle correct across all rational agents?
If you think rational agents can converge on an ultimately correct theory of physics (which we don’t have), what is to stop them converging on the correct theory of morality, which we also don’t have?
Not very rational for those who lose out to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a position of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or some such.
It is irrational for agents to sign up to anything which is not in their [added: current] interests
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
what is to stop them converging on the correct theory of morality, which we also don’t have?
Starting out with different interests. A strong clippy accommodating a weak beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
There are possibly good reasons for us as a race to aspire to working together. There are none for a domineering Clippy to take our interests into account; yielding to any supposedly “correct” morality would strictly damage its own interests.
Not very rational for those who lose out to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a position of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or some such.
Someone who adopts the “I don’t like X, but I respect people’s right to do it” approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh other values. But they are not having all their values maximally satisfied, so in that sense they are losing out.
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
There’s no evidence of terminal values. Judgements can be updated without changing values.
Starting out with different interests. A strong clippy accommodating a weak beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
Not all agents are interested in physics or maths. Doesn’t stop their claims being objective.
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
Not Beady, Anti-Clippy: an agent that is the precise opposite of Clippy. It wants to minimize the number of paperclips.
Yes, but that wouldn’t count as ethics. You wouldn’t want a Universal Law that one guy gets the harem, and everyone else is a slave, because you wouldn’t want to be a slave, and you probably would be.
If there are a lot of similar agents in similar positions, Kantian ethics works, no matter what their goals. For example, theft may appear to have positive expected value—assuming you’re selfish—but it has positive expected value for lots of people, and if they all stole the economy would collapse.
OTOH, if you are in an unusual position, the Categorical Imperative only has force if you take it as axiomatic.
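A minimal sketch of the theft example, with invented numbers for the private gain from stealing and for how fast the economy shrinks as more people steal; nothing here is from the comment above.

```python
# Toy universalization check for theft. Output is assumed to fall linearly
# with the fraction of thieves; the private gain per theft is assumed to be 0.2.

def payoff(steals, fraction_of_thieves):
    """Your share of the (possibly shrunken) economy, plus private gain if you steal."""
    economy = 1.0 - fraction_of_thieves      # what is left once this many people steal
    return economy + (0.2 if steals else 0.0)

# One selfish defector in an otherwise honest society: stealing looks strictly better.
print(payoff(True, 0.0), ">", payoff(False, 0.0))   # 1.2 > 1.0

# The maxim universalized: everyone steals, and everyone is worse off than in the
# all-honest world.
print(payoff(True, 1.0), "<", payoff(False, 0.0))   # 0.2 < 1.0
```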
This is brought out in Rawls’ version of Kantian ethics: you pretend to yourself that you are behind a veil that prevents you knowing what role in society you are going to have, and choose rules that you would want to have if you were to enter society at random.
That’s not a version of Kantian ethics; it’s a hack for designing a society without privileging yourself. If you’re selfish, it’s a bad idea.