Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
It is irrational for agents to sign up to anything which is not in their [added: current] interests.
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
What is to stop them converging on the correct theory of morality, which we also don’t have?
Starting out with different interests. A strong Clippy accommodating a weak Beady wouldn’t be in its best self-interest. It could just employ a version of morality based on some tweaked axioms, yielding different results.
There are possibly good reasons for us as a race to aspire to working together. There are none for a domineering Clippy to take our interests into account; yielding to any supposedly “correct” morality would strictly damage its own interests.
Not very rational for those to adopt a losing strategy (from their point of view), is it? Especially since they shouldn’t reason from a point of “I could be the king”. They aren’t, and they know that. No reason to ignore that information, unless they believe in some universal reincarnation or somesuch.
Someone who adopts the “I don’t like X, but I respect people’s right to do it” approach is sacrificing some of their values to their evaluation of rationality and fairness. They would not do that if their rationality did not outweigh other values, but they are not having all their values maximally satisfied, so in that sense they are losing out.
Yes. Which is why rational agents wouldn’t just go and change/compromise their terminal values, or their ethical judgements (=no convergence).
There’s no evidence of terminal values. Judgements can be updated without changing values.
Starting out with different interests. A strong Clippy accommodating a weak Beady wouldn’t be in its best self-interest. It could just employ a version of morality based on some tweaked axioms, yielding different results.
Not all agents are interested in physics or maths. Doesn’t stop their claims being objective.