Would someone implementing TDT or UDT to the best of their ability maximize their wisdom for a given intelligence/knowledge level?
I suspect part of what we ordinarily call “wisdom” involves having the right sort of utility function, which is not something that your decision theory can police. If someone were implementing TDT perfectly in order to fulfill their paperclip-maximizing desires, I doubt we would characterize them as wise.
I agree that a sizable component of wisdom is the choice of utility functions, and some UFs are certainly less wise than others, as in your example (I nearly included it in the OP, actually). However, the means of maximizing utility matters just as much, since some of those actions might backfire spectacularly. For example, a preemptive nuclear strike could be considered a means to secure the future (if you are JFK during the Cuban missile crisis), but one can hardly call it wise. Hence my point about calibration, as the sketch below illustrates.
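To make the calibration point concrete, here is a minimal sketch with purely illustrative numbers of my own choosing: the same utility function can endorse or reject a risky action depending only on how well-calibrated the agent's probability estimates are.

```python
# Minimal sketch: the same utility function, evaluated with miscalibrated
# vs. well-calibrated probabilities, can recommend opposite actions.
# All numbers are illustrative assumptions, not estimates of real stakes.

def expected_utility(p_catastrophe: float, u_catastrophe: float,
                     u_success: float) -> float:
    """Expected utility of an action that risks catastrophe."""
    return p_catastrophe * u_catastrophe + (1 - p_catastrophe) * u_success

U_CATASTROPHE = -1000.0   # e.g. full nuclear exchange
U_STRIKE_OK = 10.0        # "secured the future" if the gamble pays off
U_STATUS_QUO = 0.0        # do nothing, accept the standoff

# A miscalibrated agent badly underestimates the chance of escalation.
eu_strike_miscalibrated = expected_utility(0.001, U_CATASTROPHE, U_STRIKE_OK)
# A well-calibrated agent assigns a more realistic escalation probability.
eu_strike_calibrated = expected_utility(0.3, U_CATASTROPHE, U_STRIKE_OK)

print(eu_strike_miscalibrated)  # ~9.0  -> strike beats the status quo (0.0)
print(eu_strike_calibrated)     # ~-293 -> strike is far worse than 0.0
```

Both agents are maximizing the same UF with the same decision theory; only the calibration of the probability estimate differs, and that alone flips which action looks "wise."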