You are the monarch in that society: you do not need to guess which role you’re being born into; you have that information. You don’t need to make all the slaves happy to further your goals; you can just maximize your goals directly. You may choose any moral principle you want to govern your actions. The Categorical Imperative would not give you the best result.
For what value of “best”? If the CI is the correct theory of morality, it will necessarily give you the morally best result. Maybe your complaint is that it wouldn’t maximise your personal utility. But I don’t see why you would expect that: theories like utilitarianism, which seek to maximise group utility, don’t promise to make everyone blissfully happy individually. Some will lose out.
A different scenario: Clippy and Anti-Clippy sit in a room. Why can they not agree on epistemic facts about the most accurate laws of physics (and reach whatever other agreements Aumann’s theorem mandates), yet then go out and each optimize/reshape the world according to their own goals? Why would that make them not rational?
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
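The “half the universe each” split is what standard bargaining theory predicts for this case. A minimal sketch, assuming symmetric agents and a disagreement point of zero (mutual destruction leaves both with nothing): the Nash bargaining solution maximises the product of each agent’s gain over its disagreement payoff. The linear utilities and the grid search are my illustrative assumptions, not anything from the thread.

```python
def nash_bargaining_split(steps=10001):
    """Search splits x in [0, 1]: Clippy values x, Beady values 1 - x.

    Returns the split maximising the Nash product
    (u_clippy - d_clippy) * (u_beady - d_beady), with both
    disagreement payoffs d set to 0.
    """
    best_x, best_product = 0.0, -1.0
    for i in range(steps):
        x = i / (steps - 1)
        product = x * (1 - x)  # Nash product for this split
        if product > best_product:
            best_x, best_product = x, product
    return best_x

print(nash_bargaining_split())  # -> 0.5: half the universe each
```

With symmetric utilities and a symmetric disagreement point, the even split falls out; an asymmetry in power or in fallback payoffs would shift the solution away from one half.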
Lastly, whatever Kant’s justification, why can you not optimize for a different principle, say peak happiness versus average happiness? What makes any particular justifying principle correct across all rational agents?
If you think RAs can converge on an ultimately correct theory of physics (which we don’t have), what is to stop them converging on the correct theory of morality, which we also don’t have?
Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
It is irrational for agents to sign up to anything which is not in their [added: current] interests
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
what is to stop them converging on the correct theory of morality, which we also don’t have?
Starting out with different interests. A strong clippy accommodating a weak beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
There are possibly good reasons for us as a race to aspire to working together. There are none for a domineering Clippy to take our interests into account; yielding to any supposedly “correct” morality would strictly damage its own interests.
Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
Someone who adopts the “I don’t like X, but I respect people’s right to do it” approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh their other values. But they are not having all their values maximally satisfied, so in that sense they are losing out.
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
There’s no evidence of terminal values. Judgements can be updated without changing values.
Starting out with different interests. A strong clippy accommodating a weak beady wouldn’t be in its best self-interest. It could just employ a version of morality which is based on some tweaked axioms, yielding different results.
Not all agents are interested in physics or maths. That doesn’t stop their claims being objective.
It would be irrational for Clippy to sign up to an agreement with Beady according to which Beady gets to turn Clippy and all his clips into beads. It is irrational for agents to sign up to anything which is not in their interests, and it is not in their interests to have no contract at all. So rational agents, even if they do not converge on all their goals, will negotiate contracts that minimise their disutility. Clippy and Beady might take half the universe each.
Not Beady, Anti-Clippy: an agent that is the precise opposite of Clippy. It wants to minimize the number of paperclips.