I think this is concretely useful (and explored) as a psychological device to counter hyperbolic discounting. Think of what your future self will think about your action. Sometimes this is very useful, sometimes not.
There are also very formal, statistical methods for minimizing a mathematical quantity called regret (usually in the context of choosing between the advice of different oracles), which measures the difference in utility between taking the advice of the best overall oracle and what you actually did. The results on this aren't strong enough to, say, help you beat the stock market, but could probably be applied to certain types of self-tracking.
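To make that concrete: one standard method in this family is the Hedge (multiplicative-weights) algorithm, which maintains a weight on each oracle and exponentially down-weights any oracle that incurs loss. This is a minimal sketch, not anything from the post itself; the function names and the fixed learning rate `eta` are my own choices for illustration.

```python
import math

def hedge(losses, eta=0.5):
    """Run the Hedge algorithm over a T x K table of per-round
    losses in [0, 1] for K oracles. Returns the probability
    distribution over oracles used at each round."""
    k = len(losses[0])
    weights = [1.0] * k
    history = []
    for round_losses in losses:
        total = sum(weights)
        history.append([w / total for w in weights])
        # Exponentially penalize each oracle in proportion to its loss.
        weights = [w * math.exp(-eta * l)
                   for w, l in zip(weights, round_losses)]
    return history

def regret(losses, history):
    """Regret = the algorithm's total expected loss minus the
    total loss of the single best oracle in hindsight."""
    alg_loss = sum(sum(p * l for p, l in zip(probs, round_losses))
                   for probs, round_losses in zip(history, losses))
    best = min(sum(round_losses[i] for round_losses in losses)
               for i in range(len(losses[0])))
    return alg_loss - best
```

The theoretical guarantee is the point: regret grows only sublinearly in the number of rounds, so the per-round gap between you and the best oracle shrinks toward zero, even with no model of where the losses come from.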
In cases with defined and controlled risk, e.g. investing, you should be prepared not to regret a loss if you believe your model was correct.
But in more general life decisions, where bias is much more prevalent, regret can be useful.
Holistically, the main message is "reflect on your actual experience and what you actually know about your preferences, not on what you hope the future will be like or what you wish your own preferences were like".
Yes, I know. The actual point of the post was to provide a gedanken-experiment for oracle construction that doesn’t rely on a theory of value.
CEV and Railton’s moral realism both basically say, “I should do what my ideal adviser would say to do in my place”. This makes perfect sense, except that actually trying to construct an ideal adviser involves giving him perfect instrumental rationality, which—in the common definition of rationality as regret minimization or utility maximization—involves a theory of value, which is exactly what you were constructing in the first place.
So what I was doing here was trying to reduce those two theories one step closer to reality by saying, well, we still need to do some godawful thought-experiment, but we can do it in a way that doesn’t involve circular logic.
Any usefulness as an actual heuristic for motivating oneself to real action is completely coincidental, but pretty definitely existent.