I don’t think there is a version of UDT that prescribes cooperating with non-UDT agents. UDT is sufficiently formalized that we know what happens when a UDT agent plays a prisoner’s dilemma against a CDT agent and both parties know each other’s algorithm/code: they both defect.
If you want to cooperate out of altruism, I think the solution is to model the game differently. The payoffs that go into the game-theoretic model should be whatever your utility function says, not just your own well-being. So if you value the other person’s well-being as much as your own, you don’t face a prisoner’s dilemma at all, because cooperate/defect is a better outcome for you than defect/defect.
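To make that concrete, here is a minimal sketch. The payoff numbers (T=5, R=3, P=1, S=0) are the standard textbook values, not anything from this thread, and the `altruism` weight is my own illustrative parameter: it adds the other player’s payoff to yours.

```python
# Standard (assumed) prisoner's dilemma payoffs: T=5, R=3, P=1, S=0.
# Keys are (my move, their move); values are (my payoff, their payoff).
payoffs = {
    ("C", "C"): (3, 3),
    ("C", "D"): (0, 5),
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),
}

def utility(mine, theirs, altruism=0.0):
    """My utility: my own payoff plus `altruism` times the other player's payoff."""
    return mine + altruism * theirs

for altruism in (0.0, 1.0):
    print(f"altruism = {altruism}")
    for my_move in ("C", "D"):
        for their_move in ("C", "D"):
            mine, theirs = payoffs[(my_move, their_move)]
            print(f"  I play {my_move}, they play {their_move}: "
                  f"my utility = {utility(mine, theirs, altruism)}")
```

With `altruism = 0` defection strictly dominates (5 > 3 and 1 > 0), which is the usual dilemma. With `altruism = 1` cooperating yields utility 6 or 5 versus 5 or 2 for defecting, so under these assumed payoffs the dominance argument flips and the game is no longer a prisoner’s dilemma for you.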
by buying food they are limiting others’ chances to buy it.
But they’re only doing that if there will, in fact, be a supply shortage. That was my initial point – it depends on how many other people will stockpile food.
What worries me here is that while playing, say, a prisoner’s dilemma, an agent needs to perform an act of communication with the other prisoner to learn her decision theory, which dissolves the whole problem: if we can communicate, we can have some coordination strategy. In a one-shot prisoner’s dilemma we don’t know whether the other side is a UDT or CDT agent, and the other side doesn’t know this about us either. So both of us use similar lines of reasoning to guess whether the other agent is CDT or UDT. This similar reasoning could itself be a subject of UDT at the meta-level, since we would both win more if we assume that the other agent is a UDT agent.