Spohn shows that you can draw causal graphs such that CDT can get rewards in both cases, though only under the assumption that true precommitment is possible. But Spohn doesn’t give arguments for the possibility of precommitment, as far as I can tell.
Isn’t the possibility and, moreover, computability of precommitment just trivially true?
If you have a program DT(data), determining a decision according to a particular decision theory in the circumstances specified by data, then you can easily construct a program PDT(data), determining the decision for the same decision theory but with precommitment:
The only thing required is an if-statement and a memory object, which can be implemented via a dictionary.
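A minimal sketch of what this could look like, assuming DT is any function from a data description to a decision (the stand-in DT below is hypothetical; only the if-statement-plus-dictionary construction is the point):

```python
# Hypothetical sketch of PDT: a precommitment wrapper around an
# arbitrary decision-theory function DT, using a dictionary as the
# memory object described above.

memory = {}  # maps a situation to the decision committed to earlier


def DT(data):
    """Stand-in decision theory: just pick the first listed option."""
    return data["options"][0]


def PDT(data):
    """Same decision theory, but with precommitment."""
    key = data["situation"]
    if key not in memory:
        # First encounter with this situation: decide and commit.
        memory[key] = DT(data)
    # Every later call returns the committed decision, even if DT
    # would now choose differently.
    return memory[key]
```

On the first call for a given situation, PDT defers to DT and records the answer; on every subsequent call it ignores DT and replays the stored decision, which is exactly what a precommitment is supposed to do.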
Yes, but I was talking about humans. An AI might have a precommitment ability.
This also seems trivially true to me. I’ve successfully precommitted multiple times in my life, and I bet you have as well.
What you are probably talking about is the fact that humans occasionally fail at precommitments. But isn’t that an isolated demand for rigor? Humans occasionally fail at following any decision theory, or at being rational in general. That doesn’t make decision theories, or rationality itself, incoherent concepts we therefore can’t talk about.
Actually, now that I think about it, isn’t deciding which decision theory to follow itself a precommitment?