The TDT user sees the problem as follows: if he fights for a cause, others may also fight for some less important cause that they believe is more important, leading to both causes being harmed. He responds by reducing his willingness to fight.
Someone who is morally uncertain (because he is not omniscient) realizes that the cause he is fighting for might not be the most important one, and that others' causes may actually be the right ones, which should reduce his willingness to fight by the same amount.
If we assume that all agents believe in the same complicated process for calculating utilities, but are unsure how it works out in practice, then what they lack is purely factual (physical) knowledge, and their disagreements should be subject to the agreement theorems. If agents' extrapolated volitions do not cohere, this assumption fails.
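To make the comparison between the two discounts concrete, here is a toy numeric sketch. The payoff structure (+v per unit of effort for the truly more important cause, -v for the other, and a conflict cost c per unit of effort on each side) is my own illustrative assumption, not anything drawn from TDT itself:

```python
# Toy model of the two discounts on "willingness to fight" discussed above.

def moral_uncertainty_value(p, f, v=1.0, c=0.2):
    """Expected moral value of fighting at intensity f, taking the other
    side's behaviour as fixed: with credence p my cause really is the
    more important one (+v per unit of effort), else I am diverting
    effort from the better cause (-v)."""
    return (2 * p - 1) * v * f - c * f

def tdt_value(p, f, v=1.0, c=0.2):
    """A TDT agent evaluates the policy 'fight at intensity f', knowing
    a symmetric opponent running the same algorithm fights at f too.
    The mirrored efforts cancel in expectation, leaving only the
    conflict damage to both causes."""
    expected_gain = p * (v * f - v * f) + (1 - p) * (-v * f + v * f)  # = 0 either way
    return expected_gain - 2 * c * f

for p in (0.5, 0.7, 0.9):
    print(f"p={p}: moral-uncertainty value of fighting at f=1 is "
          f"{moral_uncertainty_value(p, 1.0):+.2f}, "
          f"TDT policy value is {tdt_value(p, 1.0):+.2f}")
```

In this crude model, the morally uncertain agent's case for fighting shrinks as his credence p falls toward the symmetric 1/2 that the agreement theorems would push shared-evidence Bayesians toward, at which point both considerations agree that fighting is net negative.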