If they have limited information on the good, wouldn’t a conversation invoke a kind of ethical Aumann’s Agreement Theorem?
In general, if everyone agrees about some morality and disagrees about what it entails, that’s a disagreement over facts, and confusion over facts will cause problems in any decision theory.
wouldn’t a conversation invoke a kind of ethical Aumann’s Agreement Theorem?
Yes, if there is time for a polite conversation before making an ethical decision. Too bad that the manufacturers of trolley problems usually don’t allow enough time for idle chit-chat.
Still, it is an interesting conjecture. The eAAT conjecture. Can we find a proof? A counter-example?
Here is an attempt at a counter-example. I strongly prefer to keep my sexual orientation secret from you. You only mildly prefer to know my sexual orientation. Thus, it might seem that my orientation should remain secret. But then we risk that I will receive inappropriate birthday gifts from you. Or, what if I prefer to keep secret the fact that I have been diagnosed with an incurable fatal disease? What if I wish to keep this a secret only to spare your feelings?
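To put rough numbers on the counter-example (the magnitudes here are invented purely for illustration, and the gift scenario is deliberately exaggerated): a naive utilitarian tally of the direct preferences favors secrecy, but the downstream cost of you acting on missing information can flip the verdict.

```python
# Hypothetical utilities for the secrecy counter-example (my numbers, not
# anything canonical).  Higher is better; a utilitarian sums over both of us.

my_cost_if_revealed = -10    # I strongly prefer to keep the secret
your_gain_if_told   = +1     # you only mildly prefer to know

# Downstream consequence of you choosing gifts with missing information.
p_gift_goes_wrong = 0.75     # chance of an inappropriate birthday gift
awkwardness_each  = -8       # cost to each of us when that happens

reveal      = my_cost_if_revealed + your_gain_if_told       # -9
keep_secret = p_gift_goes_wrong * (2 * awkwardness_each)    # -12

print("reveal:     ", reveal)
print("keep secret:", keep_secret)   # worse, with these particular numbers
```

Which option maximizes the sum depends entirely on magnitudes that, by construction, only one of us can see, and that is what makes the counter-example bite.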
Of course, we can avoid this kind of problem by supplementing our utility maximization principle with a second moral axiom—No Secrets. Can we add this axiom and still call ourselves pure utilitarians? Can we be mathematically consistent utilitarians without this axiom? I’ll leave this debate to others.
It is an interesting exercise, though, to revisit the von Neumann-Morgenstern/Savage/Anscombe-Aumann constructions of utility functions when agents are allowed to keep some of their preferences secret. Agents would still know their own utilities exactly, but would only have a range (or a pdf?) for the utilities of other agents. It might be illuminating to reconstruct game theory and utilitarian ethics incorporating this twist.
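As a very rough sketch of that twist (my own toy construction, not anything from the von Neumann-Morgenstern, Savage, or Anscombe-Aumann literature): one agent knows its own payoffs exactly, knows only an interval for the hidden parameter behind the other agent's preferences, and best-responds against a uniform distribution over that interval.

```python
import random

# Toy sketch: agent A knows its own payoffs exactly, but only knows an
# interval for the secret parameter t that determines B's choice.
# A treats t as uniform on that interval and best-responds in expectation.

# A's payoff as a function of (A's action, B's action)
A_payoff = {(0, 0): 3, (0, 1): 0,
            (1, 0): 1, (1, 1): 2}

# B plays action 1 exactly when its secret parameter t exceeds 0.5
# (A knows the rule, but not t itself).
def B_action(t):
    return 1 if t > 0.5 else 0

def A_best_response(t_lo, t_hi, samples=100_000):
    # Estimate P(B plays 1) by sampling t from A's interval of belief.
    p1 = sum(B_action(random.uniform(t_lo, t_hi)) for _ in range(samples)) / samples
    expected = {a: (1 - p1) * A_payoff[(a, 0)] + p1 * A_payoff[(a, 1)]
                for a in (0, 1)}
    return max(expected, key=expected.get), expected

action, expected = A_best_response(t_lo=0.2, t_hi=0.9)
print("A's expected payoffs:", expected)
print("A's best response:", action)
```

Replacing the interval with a full pdf, and letting both sides reason this way about each other, is where the reconstruction would start to get interesting.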
The TDT user sees the problem as being that if he fights for a cause, others may also fight for some less-important cause that they think is more important, leading to both causes being harmed. He responds by reducing his willingness to fight.
Someone who is morally uncertain (because he is not omniscient) realizes that the cause he is fighting for might not be the most important one, and that others' causes may actually be correct, which should reduce his willingness to fight by the same amount.
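Here is a toy comparison of the two discounts (all numbers are mine and purely illustrative; the uncertainty level q is deliberately chosen so that the two arguments shave off the same amount):

```python
# Toy numbers comparing the two reasons to discount the value of fighting.

V = 100.0            # value of the genuinely most important cause prevailing
BASELINE = 0.5 * V   # expected value if nobody fights and influence is split
DAMAGE = 15.0        # harm to both causes when everyone fights at once

def naive_value_of_fighting():
    # Morally certain, causal reasoner: "if I fight, my cause prevails."
    return V - BASELINE                      # +50

def tdt_value_of_fighting():
    # "If I fight, everyone running my decision procedure fights for their
    # own cause too; each cause prevails half the time and both are damaged."
    return (0.5 * V - DAMAGE) - BASELINE     # -15

def uncertain_value_of_fighting(q):
    # Causal reasoner who thinks his own cause is the most important one
    # only with probability q.
    return q * V - BASELINE                  # (q - 0.5) * V

print(naive_value_of_fighting())             # 50.0
print(tdt_value_of_fighting())               # -15.0
print(uncertain_value_of_fighting(q=0.35))   # about -15, q chosen to match
```

The coincidence of the last two numbers is by construction; the sketch only illustrates that both arguments push in the same direction, not that they must always push equally hard.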
If we assume that all agents believe in the same complicated process for calculating the utilities, but are unsure how it works out in practice, then what they lack is purely factual (physical) knowledge, which should be subject to all the agreement theorems. If agents' extrapolated volitions are not coherent, this assumption is false.
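A minimal sketch of the factual-agreement half of that claim, under the assumptions of a shared prior and honest exchange of evidence (strictly, Aumann's theorem only requires exchanging posteriors, but for a conversation this simpler model makes the point): two agents who privately disagree about which cause matters more end up with the identical posterior once they pool their signals.

```python
from math import prod

PRIOR = {"A": 0.5, "B": 0.5}   # common prior over which cause matters more
ACCURACY = 0.8                 # each private signal is correct with prob 0.8

def likelihood(signal, hypothesis):
    return ACCURACY if signal == hypothesis else 1 - ACCURACY

def posterior(signals):
    # Bayesian update of the common prior on any collection of signals.
    unnorm = {h: p * prod(likelihood(s, h) for s in signals)
              for h, p in PRIOR.items()}
    total = sum(unnorm.values())
    return {h: v / total for h, v in unnorm.items()}

# Before the conversation: each agent has only its own signal, and they disagree.
agent1_signal, agent2_signal = "A", "B"
print("agent 1 alone:", posterior([agent1_signal]))  # approx {'A': 0.8, 'B': 0.2}
print("agent 2 alone:", posterior([agent2_signal]))  # approx {'A': 0.2, 'B': 0.8}

# After the conversation: both condition on both signals and agree exactly.
print("after sharing:", posterior([agent1_signal, agent2_signal]))
```

If the agents do not in fact share the same prior over the moral facts, which is the incoherent-extrapolated-volitions case, nothing in this sketch forces convergence.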