An example in this case would be a concrete description of a situation where an agent has to make a decision based on specified available information, together with an analysis of what decision UDT (and whatever decision theory you would like to compare it to) would make, and what happens to agents that make those decisions.
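Here is a minimal sketch of what such an example might look like, using counterfactual mugging with the payoffs commonly quoted for it ($100 demanded on tails, $10,000 rewarded on heads). The specific numbers and the pay/refuse framing are my own illustrative assumptions, not anything canonical:

```python
# Assumed setup: Omega flips a fair coin; on tails it asks you for $100,
# on heads it pays you $10,000 iff it predicts you would have paid on tails.

P_HEADS = 0.5
REWARD = 10_000  # paid on heads, conditional on Omega predicting you pay
COST = 100       # demanded on tails

def expected_value(pays_when_asked: bool) -> float:
    """Expected value of a fixed policy, evaluated before the coin flip."""
    heads_payoff = REWARD if pays_when_asked else 0
    tails_payoff = -COST if pays_when_asked else 0
    return P_HEADS * heads_payoff + (1 - P_HEADS) * tails_payoff

# UDT-style reasoning picks the policy with the best prior expected value,
# so it pays; an agent deciding only after seeing tails compares -100 to 0
# and refuses. These two numbers are what "what happens to agents that make
# those decisions" cashes out to.
print(expected_value(pays_when_asked=True))   # 4950.0
print(expected_value(pays_when_asked=False))  # 0.0
```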
Essentially, it sounds to me a lot like “Odin made physics”: a rationalization that adds complexity without adding value.
It is more like this: relativity accurately describes things that go fast, and agrees with Newtonian physics about the slow-moving things we are used to.
> sunk costs fallacy
The sunk cost fallacy is caring more about making a previous investment pay off than about getting the best payoff on your current decision. Where is the previous investment in counterfactual mugging?
I don’t have a proper response for you, but this came out of thinking about your comments, and you may be interested in it.
At the moment, I can’t wrap my head around what it actually means to do math with UDT. If it’s truly updateless, then it’s worthless, because a decision theory that ignores evidence is terrible. If it updates in some bizarre fashion, I’m not sure how that differs from updating normally. It seems like UDT is designed specifically to do well on these sorts of problems, but I think that’s a horrible criterion (as explained in the linked post), and I don’t see it behaving differently from simple second-order game theory. It’s different from first-order game theory, but that’s not its competitor.
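For what it’s worth, here is one toy way to cash out the difference, under my own assumptions about what “updateless” means operationally: an updating agent conditions on its observation and then picks an action, while an updateless agent picks, in advance, a whole policy (a map from observations to actions) that maximizes expected value under the prior. The world model below is a stand-in, not any official formalization of UDT:

```python
from itertools import product

OBSERVATIONS = ["heads", "tails"]
ACTIONS = ["pay", "refuse"]
PRIOR = {"heads": 0.5, "tails": 0.5}

def payoff(world: str, policy: dict) -> float:
    # Toy counterfactual-mugging payoffs; they depend on the whole policy
    # because Omega's prediction keys on what you would do on tails.
    if world == "heads":
        return 10_000.0 if policy["tails"] == "pay" else 0.0
    return -100.0 if policy["tails"] == "pay" else 0.0

# Updateless: argmax over complete policies under the prior.
policies = [dict(zip(OBSERVATIONS, acts))
            for acts in product(ACTIONS, repeat=len(OBSERVATIONS))]
best_policy = max(policies,
                  key=lambda pol: sum(PRIOR[w] * payoff(w, pol) for w in PRIOR))
print(best_policy)  # pays on tails (the heads action is a tie in this toy)

# Updating: condition on the observation, then argmax over actions.
def updating_value(observation: str, action: str) -> float:
    # The posterior puts all weight on the observed world in this setup.
    return payoff(observation, {"heads": action, "tails": action})

best_after_tails = max(ACTIONS, key=lambda a: updating_value("tails", a))
print(best_after_tails)  # 'refuse': -100 vs 0 once the coin is known
```

The only structural difference between the two computations is whether the maximization happens over policies under the prior or over actions under the posterior, which is where the two theories come apart on this class of problems.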