If I’m interpreting the terms charitably, I think I put this more like 70%… which seems like a big enough numerical spread to count as disagreement—so upvoted!
My arguments here grow out of expectations about evolution, watching chickens interact with each other, rent seeking vs. gains from trade (and game theory generally), Hobbes's Leviathan, personal musings about Fukuyama's The End of History extrapolated into transhuman contexts, and more ideas in this vein.
It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out… but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a “theory of morality”.
But even then, being able to generate evidence about the absence of an objective object-level "theory of morality" would itself seem to offer a strategy for taking a universally acceptable position on the general subject… which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel's The Last Word: "If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it."
I almost agree with this due to fictional evidence from Three Worlds Collide, except that a manufactured intelligence such as an AI could be constructed without evolutionary constraints, and the claim that every possible descendant of a being that survived evolution MUST have a moral similarity to every other such being seems like a much more complicated and less likely hypothesis.