Agreed. I think one could assert “Given a perfect decision theory AND a perfect implementation, additional information is never a negative”, but it’s silly to live as though that were true. If you know your decision theory doesn’t handle X information correctly (say, sunk costs) then it’s in your best interests to either eliminate the information, or fix the decision theory.
Of course, eliminating information seems to be by far the easier option...
If I know the class of errors my decision theory tends to make given the kinds of Xes I most commonly run into, I can also adopt a third option… for want of a better term, I can patch my decision theory. E.g., “Well, I want to finish this project, but I suspect that part of that desire stems from an invalid weighting of sunk costs, so I won’t take that desire at face value… I’ll apply some kind of rough-and-ready discounting factor to it.” This is clearly not as good as actually fixing my decision theory, but isn’t as hard either, and is sometimes more practical than eliminating the information.
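For concreteness, here’s a minimal sketch of what such a patch might look like, assuming (hypothetically) that you can decompose your motivation into a suspected sunk-cost component and everything else; the function name and the 0.2 factor are illustrative, not a real calibration:

```python
# Hypothetical sketch of "patching" a decision rule: rather than taking my
# desire to finish a project at face value, I discount the part of it that
# I suspect comes from invalid sunk-cost weighting.

def patched_desire(raw_desire: float,
                   suspected_sunk_cost_component: float,
                   discount: float = 0.2) -> float:
    """Downweight the portion of a desire attributed to sunk costs.

    raw_desire: how much I want to finish the project, at face value.
    suspected_sunk_cost_component: the share of raw_desire I suspect
        stems from invalid sunk-cost weighting.
    discount: rough-and-ready factor applied to that suspect share.
    """
    valid_part = raw_desire - suspected_sunk_cost_component
    return valid_part + discount * suspected_sunk_cost_component

# E.g., if I want to finish at strength 10, but suspect 6 of that is
# sunk-cost driven, the patched desire is 4 + 0.2 * 6 = 5.2.
print(patched_desire(10.0, 6.0))  # 5.2
```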
Very true. However, “avoid X information, since it biases me” is itself an example of such a patch, especially if the information has no other useful value. How often does knowledge of sunk costs actually move you toward ideal action, rather than biasing you away from it?
Sure, avoiding information is an example of patching a decision theory, agreed.
So I guess what I’m saying is that “either eliminate the information, or fix the decision theory” is a misleading way to phrase the choice. My real choice is between fixing it and patching it, where eliminating the information is one of several ways to patch it, and not always the best.
Making choices about future investments in ignorance of the existing data I have about previous investments and their ROI is probably less ideal than taking those data into consideration and applying some other patch to compensate for sunk-costing.
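A rough sketch of that alternative, under hypothetical assumptions (made-up ROI records, a simple mean as the estimator): keep the historical data because it informs my estimate of future returns, but exclude the sunk amounts themselves from the decision value.

```python
# Hypothetical sketch: use past-investment data as evidence about future
# ROI, while scoring options on expected *future* return only, so the
# money already spent exerts no pull. Names and numbers are illustrative.

from statistics import mean

def expected_future_value(past_rois: list[float],
                          additional_investment: float) -> float:
    """Estimate future value from historical ROI, ignoring money already spent.

    past_rois: observed returns per unit invested on earlier rounds;
        used only as evidence about future returns.
    additional_investment: what I would spend going forward.
    """
    estimated_roi = mean(past_rois)               # the data informs the estimate...
    return estimated_roi * additional_investment  # ...but sunk spend is excluded

# Project A: poor track record; Project B: good track record.
print(expected_future_value([0.3, 0.4, 0.2], 100.0))  # 30.0
print(expected_future_value([1.5, 1.2, 1.4], 100.0))  # ~136.7
```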
I like the idea of phrasing it as “patching vs long-term fixes” :)