I have to echo orthonormal: information, if processed without bias [availability bias, for example], should improve our decisions, and getting information is not always easy. I don’t see how this raises any questions about the rational process, or, as you say, a principled fashion.
“But by what principled fashion should you choose not to eat the fugu?”
This seems like a situation where the simplest expected-value calculation would give you the ‘right’ answer. The expected value of eating the oysters is 1; the expected value of eating the fugu is the expected value of eating an unknown dish, which you’d probably base on your prior experience with unknown dishes offered for sale in restaurants of that type. [I assume you’d expect lower utility in some places than others.] In this case that choice would kill you, but that is not a failure of rationality.
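To make the comparison concrete, here is a minimal sketch of that expected-value calculation. All the numbers (the priors and the utilities, including the large negative utility for death) are illustrative assumptions of mine, not figures from the discussion:

```python
# Two-outcome expected-utility comparison, with hypothetical numbers.

p_oysters_good = 0.9                     # assumed prior from past shellfish meals
u_oysters = {True: 1.0, False: -0.5}     # assumed utilities: fine meal vs. dodgy one

p_fugu_safe = 0.95                       # assumed prior for unknown dishes here
u_fugu = {True: 2.0, False: -1000.0}     # death is catastrophically bad (assumed)

def expected_value(p_good, utilities):
    """Expected utility over a good/bad binary outcome."""
    return p_good * utilities[True] + (1 - p_good) * utilities[False]

ev_oysters = expected_value(p_oysters_good, u_oysters)
ev_fugu = expected_value(p_fugu_safe, u_fugu)

print(ev_oysters)  # 0.85
print(ev_fugu)     # -48.1
```

Even with a 95% chance the fugu is safe, the catastrophic downside dominates, so the calculation favors the oysters.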
In a situation without the constraints of the example, research on fugu would obviously provide you with the info you need. A web-enabled phone and google would provide you with everything you need to know to make the right call.
Humans actually solve this type of problem all the time, though the stakes are usually lower. A driver on a road trip may settle for low-quality food [a fast-food chain, representing the oysters] for the higher certainty of his expected value [convenience, uniform quality]. It’s simply the best use of available information.
Sorry, I wasn’t clear: the expected value of the oysters is not 1; that is the value you discover after eating. It is unknown; you haven’t had them before either. You have had other shellfish, some of which have been dodgy.
Whether getting killed by fugu is a failure of rationality or not, it is a failure. It is not hitting a small target in the optimization space.
If you want modern examples of these sorts of problems not solvable by web phone, consider questions like whether we should switch on the LHC, or create AI.
Whpearson----I think I do see some powerful points in your post that aren’t getting fully appreciated by the comments so far. It looks to me like you’re constructing a situation in which rationality won’t help. I think such situations necessarily exist in the realm of platonic possibility. In other words, it appears you provably cannot always win across all possible math structures; that is, I think your observation can be considered one instance of a no free lunch theorem.
My advice to you is that No Free Lunch is a fact and thus you must deal with it. You can’t win in all worlds, but maybe you can win in the world you’re in (assuming it’s not specially designed to thwart your efforts; in which case, you’re screwed). So just because rationality has limits, does not mean you shouldn’t still try to be rational. (Though also note I haven’t proven that one should be rational by any of the above).
Eli addressed this dilemma you’re mentioning in “passing the recursive buck” and elsewhere on Overcoming Bias.
My point is slightly different from NFL theorems. They say that if you exhaustively search a problem, then for any search method there exist problems on which that method finds the optimum last.
I’m trying to say there are problems where exhaustive search is something you don’t want to do, e.g. seeing what happens when you stick a knife into your heart, or jumping into a bonfire. These problems clearly exist in real life, whereas for NFL problems it is harder to make the case that they exist in real life for any specific agent.
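The knife example can be sketched as a toy search problem. The setup below is entirely my own illustration (the action names and payoffs are made up): an agent that exhaustively tries every option will eventually try the irreversible one, while an agent that uses prior knowledge to exclude known-fatal options survives to pick the best of the rest:

```python
# Toy illustration: exhaustive search vs. prior-guided search when one
# option is irreversibly fatal. None marks a fatal outcome (assumed encoding).

actions = {"eat_bread": 1.0, "eat_oysters": 0.5, "stick_knife_in_heart": None}

def exhaustive_search(actions):
    """Try every action; the fatal one ends the search permanently."""
    best = None
    for name, payoff in actions.items():
        if payoff is None:          # the "experiment" kills the agent
            return "dead"
        if best is None or payoff > actions[best]:
            best = name
    return best

def prior_guided_search(actions, known_fatal):
    """Skip actions already believed fatal, then pick the best remainder."""
    candidates = {k: v for k, v in actions.items() if k not in known_fatal}
    return max(candidates, key=candidates.get)

print(exhaustive_search(actions))                              # "dead"
print(prior_guided_search(actions, {"stick_knife_in_heart"}))  # "eat_bread"
```

The asymmetry is the point: in these problems the cost of gathering information by trying everything is not just inefficiency but irreversible loss.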
Wh- I definitely agree with the point you’re making about knives etc., though I think one interpretation of the NFL as applying not just to search but also to optimization makes your observation an instance of one type of NFL. Admittedly, there are some fine-print assumptions, which I think go under the term “almost no free lunch” when discussed.