There is an objectively real morality. (10%) (I expect that most LWers assign this proposition a much lower probability.)
If I’m interpreting the terms charitably, I think I put this more like 70%… which seems like a big enough numerical spread to count as disagreement—so upvoted!
My argument here grows out of expectations about evolution, watching chickens interact with each other, rent-seeking vs. gains from trade (and game theory generally), Hobbes’s Leviathan, personal musings about Fukuyama’s The End of History extrapolated into transhuman contexts, and more ideas in this vein.
It is quite likely that experiments to determine the contents of morality would themselves be unethical to carry out… but given arbitrary computing resources and no ethical constraints, I can imagine designing experiments about objective morality that would either shed light on its contents or else give evidence that no true theory exists which meets generally accepted criteria for a “theory of morality”.
But even then, being able to generate evidence about the absence of an objective, object-level “theory of morality” would itself seem to offer a strategy for taking a universally acceptable position on the general subject… which still seems to make this an area where objective and universal methods can provide moral insights. This dodge is friendly towards ideas in Nagel’s “The Last Word”: “If we think at all, we must think of ourselves, individually and collectively, as submitting to the order of reasons rather than creating it.”
I almost agree with this, due to fictional evidence from Three Worlds Collide, except that a manufactured intelligence such as an AI could be constructed without evolutionary constraints, and saying that every possible descendant of a being that survived evolution MUST have a moral similarity to every other such being seems like a much more complicated and less likely hypothesis.
This probably isn’t what you had in mind, but any single complete human brain is a (or contains a) morality, and it’s objectively real.
Indeed, that was not at all what I meant.
Does the morality apply to paperclippers? Babyeaters?
I’d say that it’s about as likely to apply to paperclippers or babyeaters as it is to us. While I think there’s a non-trivial chance that such a morality exists, I can’t even begin to speculate about what it might be or how it exists. There’s just a lot of uncertainty and very little evidence either way.
The reason I think there’s a chance at all, for what it’s worth, is the existence of information theory. If information is a fundamental mathematical concept, I don’t think it’s inconceivable that there are all kinds of mathematical laws specifically about engines of cognition, some of which may look like things we call morality.
But most likely not.
Information theory is the wrong place to look for objective morality. Information is purely epistemic—i.e. about knowing. You need to look at game theory. That deals with wanting and doing. As far as I know, no one has had any moral issues with simply knowing since we got kicked out of the Garden of Eden. It is what we want and what we do that get us into moral trouble these days.
Here is a sketch of a game-theoretic golden rule: Form coalitions that are as large as possible. Act so as to yield the Nash bargaining solution in all games with coalition members—pretending that they have perfect information about your past actions, even though they may not actually have perfect information. Do your share to punish defectors and members of hostile coalitions, but forgive after fair punishment has been meted out. Treat neutral parties with indifference—if they have no power over you, you have no reason to apply your power over them in either direction.
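For concreteness, here is a minimal sketch of the “Nash bargaining solution” step, assuming a finite set of feasible outcomes and a known disagreement point (both invented purely for illustration): the solution is just the feasible outcome that maximizes the product of each player’s gain over their disagreement payoff.

```python
# A minimal sketch (not part of the original comment) of picking the Nash
# bargaining solution from a finite set of feasible outcomes. The outcomes
# and disagreement point below are hypothetical, purely for illustration.

def nash_bargaining(outcomes, disagreement):
    """Return the outcome maximizing the Nash product (u1 - d1) * (u2 - d2),
    considering only outcomes that give both players at least their
    disagreement payoff."""
    d1, d2 = disagreement
    feasible = [(u1, u2) for (u1, u2) in outcomes if u1 >= d1 and u2 >= d2]
    return max(feasible, key=lambda u: (u[0] - d1) * (u[1] - d2))

# Splitting 10 units of surplus, with outside options of 2 and 1:
splits = [(k, 10 - k) for k in range(11)]
print(nash_bargaining(splits, (2, 1)))  # -> (5, 5)  ((6, 4) ties with it)
```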
This “objective morality” is strikingly different from the “inter-subjective morality” that evolution presumably installed in our human natures. But this may be an objective advantage if we have to make moral decisions regarding Baby Eaters who presumably received a different endowment from their own evolutionary history.
This does help bring clarity to the babyeaters’ actions: The babies are, by existing, defecting against the goal of having a decent standard of living for all adults. The eating is the ‘fair punishment’ that brings the situation back to equilibrium.
I suspect that we’d be better served by a less emotionally charged word than ‘punishment’ for that phenomenon in general, though.
Oh, I think “punishment” is just fine as a word to describe the proper treatment of defectors, and it is actually used routinely in the game-theory literature for that purpose. However, I’m not so sure I would agree that the babies in the story are being “punished”.
I would suggest that, as powerless agents not yet admitted to the coalition, they ought to be treated with indifference, perhaps to be destroyed like weeds, were no other issues involved. But there is something else involved—the babies are made into pariahs, something similar to a virgin sacrifice to the volcano god. Participation in the baby harvesting is transformed to a ritual social duty. Now that I think about it, it does seem more like voodoo than rational-agent game theory.
However, the game theory literature does contain examples where mutual self-punishment is required for an optimal solution, and a rule requiring one to eat one’s own babies does at least provide some incentive to minimize the number of excess babies produced.
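As a toy illustration of the “punish defectors, but forgive” clause in the rule sketched above (just a sketch with made-up payoffs, not an example drawn from that literature): a tit-for-tat-style strategy that answers a defection with exactly one defection and then returns to cooperation loses only one round’s worth of payoff to an unconditional defector, while sustaining full cooperation with its own kind.

```python
# A toy repeated prisoner's dilemma (hypothetical payoffs) illustrating the
# "punish defectors, then forgive" clause: a strategy that defects exactly
# once in response to a defection, then returns to cooperation.

PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def forgiving_punisher(my_history, their_history):
    # Punish the opponent's most recent defection with one defection,
    # then cooperate again once they do.
    return 'D' if their_history and their_history[-1] == 'D' else 'C'

def always_defect(my_history, their_history):
    return 'D'

def play(strat_a, strat_b, rounds=100):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_a, hist_b), strat_b(hist_b, hist_a)
        pa, pb = PAYOFF[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(play(forgiving_punisher, forgiving_punisher))  # (300, 300): stable cooperation
print(play(forgiving_punisher, always_defect))       # (99, 104): exploited only once
```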
Does that “game-theoretic golden rule” even tell you how to behave?
Do you also think there is a means or mechanism for humans to discover and verify the objectively real morality? If so, what could it be?
I would assume any objectively real morality would be in some way entailed by the physical universe, and therefore in theory discoverable.
I wouldn’t say that a thing existed if it could not interact in any causal way with our universe.
I expect a plurality may vote as you expect, but 10% seems reasonable based on my current state of knowledge.
Voted up for under-confidence. God exists, and he defined morality the same way he defined the laws of physics.