This also seems trivially true to me. I’ve successfully precommitted multiple times in my life, and I bet you have as well.
What you are probably pointing at is that humans occasionally fail at precommitments. But isn’t that an isolated demand for rigor? Humans occasionally fail at following any decision theory, or at being rational in general. That doesn’t make every decision theory, or rationality itself, an incoherent concept we therefore can’t talk about.
Actually, now that I think about it, isn’t deciding which decision theory to follow itself a precommitment?
Yes, but I was talking about humans. An AI might have the ability to precommit.