Often in Knightian problems you are just screwed and there’s nothing rational you can do.
As you know, this attitude isn’t particularly common ’round these parts, and while I fall mostly in the “Decision theory can account for everything” camp, there may still be a point there. “Rational” isn’t really a category so much as a degree. Formally, it’s a function on actions that measures how closely an action corresponds to the perfect decision-theoretic action. My impression is that there’s a Gödelian consideration lurking somewhere, which is where the “Omega fines you exorbitantly for using TDT” thought experiment comes into play.
That thought experiment never bothered me much, as it just is what it is: a problem where you are just screwed, and there’s nothing rational you can do to improve your situation. You’ve already rightly programmed yourself to use TDT, and even your decision to stop using TDT would be made using TDT, and unless Omega is making exceptions for that particular choice (in which case you should self-modify to non-TDT), Omega is just a jerk that goes around fining rational people.
In such situations, the words “rational” and “irrational” are less useful descriptors than simply observing which source code gets executed. If you’re formal about it and use a metric R, then you would be more R, but R’s correlation with “rational” wouldn’t really be the point.
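To make that degree-of-rationality idea a bit more concrete, here is one possible formalization (my own sketch, not anything anyone in this thread has committed to): take R to be an action’s normalized regret against the decision-theoretically perfect action,

R(a) = E[U | a] / max_{a'} E[U | a'],

so the perfect action scores 1 and every other action scores less, and “rational” becomes a matter of degree rather than a binary. On a measure like this, the TDT agent in the Omega case still scores near 1 even while losing money, which is exactly the sense in which R and doing well in that particular game come apart.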
But in this case, again, I think there’s a straightforward, simple, sensible approach (which so far no one has suggested...)
So, I don’t think the black box is really one of the situations I’ve described. It seems to me that a decision theorist training herself to be more generally rational is in fact improving her odds of winning the black box game. All the approaches outlined so far also seem to improve her odds. I don’t think a better solution exists, and she will often lose if she lacks time to reflect. But the more rational she is, the more often she will win.