I like the word “optimal”, and it frequently overlaps with “rational”, but they aren’t interchangeable. I think the key difference is that the word “rationality” carries along an implicit human actor with beliefs, knowledge and goals. On the other hand, the word “optimal” frequently ends up pointed at narrow, abstract goals, omniscient viewpoints and unavailable options.
“Investing in ABCD turned out to be suboptimal, but it was rational because he couldn’t have known their factory would be hit by an earthquake.” (Optimality is with respect to a state of perfect information, but past-tense rationality is with respect to the information that was actually available.)
“Sending a letter was suboptimal, but it was rational because I didn’t have an email address.” (Optimization is over an implied set of contact methods; rationality counts only those that are available.)
“That algorithm is optimal, but using it would be irrational because it’s complicated.” (Implied narrow goal: computational efficiency. Actual goal system includes simple implementation.)
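To make the algorithm example concrete, here is a hypothetical sketch: selecting the k-th smallest element of a list has a worst-case-optimal O(n) algorithm (median of medians), but it is long and easy to get wrong, so the simple sort-and-index version is usually the rational choice. The function name below is invented for illustration.

```python
# "Optimal" vs. "rational" in code: median-of-medians selects the k-th
# smallest element in worst-case O(n), but takes dozens of fiddly lines.
# Sorting is O(n log n), yet the simple version is what a rational
# implementer usually ships.

def kth_smallest_simple(xs, k):
    """Simple: sort and index. Slightly suboptimal, hard to get wrong."""
    return sorted(xs)[k]

data = [9, 1, 7, 3, 5]
print(kth_smallest_simple(data, 2))  # -> 5
```

The optimal algorithm wins only on the narrow goal of asymptotic worst-case cost; once the goal system includes implementation effort and correctness risk, the simple one wins.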
Since talking about rationality frequently means talking about how to deal with limited information and clarifying confusions about goals, the word “optimal” doesn’t always work. There’s another important difference, too, which is that “rationality” is more meta; and this protects it somewhat, in that I’d expect a book about “optimal decision-making” to tell me what to do, and a book about “rational decision-making” to tell me how to decide. The latter is more trustworthy, since the layer of indirection makes it hard to slip in agendas.
Upvoted for the clear examples. But could you explain your final sentence?
If someone wants to convince you of an incorrect, object-level thing, and they’re talking to you about what’s optimal, then they can argue for it directly. If they’re talking about what’s rational, they’ll have to fool you twice—first in getting you to accept a decision procedure that produces it, and again in getting you to accept the thing itself.
Hmm... could you give an example? I think I disagree, because it might be easy to get people to swallow an abstract argument about decisions, and the object-level thing might just be a hop, skip and a jump from that. Getting people to swallow the object-level thing first could be harder, because the objects can’t be disguised under abstract labels and hidden inferences. But again, an example of what you have in mind would illuminate things.
The jump is easy only if you happen to take ideas seriously. People compartmentalize by default, so they shouldn’t have much trouble “trusting” a decision procedure while at the same time finding excuses for why it wouldn’t work for them in a particular case.
If you do take ideas seriously, it will be harder to make you accept a shaky decision procedure at all: you would find too many examples in your own life where following it wouldn’t have worked.
This all sounds plausible, but I’d like an example.
It’s funny, though: here we are disputing (abstractly) whether abstract or object-level discourse is more pliable to the pens of deceivers, and I’m insisting on a more object-level discussion. Ha.