If you want to be thought of as someone who would take revenge, then it’s rational to do what you can to obtain this reputation, which may or may not include actually taking revenge (for example, you could boast about taking revenge on someone whom none of the people you’re lying to is likely to meet).
If an fMRI exam can detect that you will not actually take revenge, then faking it is impossible.
As for being subjected to an fMRI exam, I don’t see how it’s relevant. If nothing you can possibly do can have any effect on the result of the exam, then rationality (or irrationality) doesn’t enter into it.
Try reading the post again. The question at issue is whether rationality always wins. If nothing you can do can make you, the rationalist, win, then rationality loses. That’s part of the point.
Then it’s a trivially obvious point. There’s no need to talk about mind-reading deities and fMRI exams; any scenario where the rationalist doesn’t get what he wants because of circumstances beyond his control would be an equivalent example:
If a rationalist is fired because of the economic depression, then rationality ‘loses’.
If a rationalist’s wife leaves him because she’s discovered she’s a lesbian, then rationality ‘loses’.
If a rationalist is hit by a meteor, then rationality ‘loses’.
What makes your fMRI example seem different is that the thing that’s beyond our control is having the kind of brain that leads to rational decision-making. This doesn’t change the fact that we never had the opportunity to make a decision.
In your example, it’s true that a person would be unemployable because he has the kind of brain that leads to rational decision-making. However, it’s false that this person would be unemployable because he made a rational decision (since he hasn’t made a decision of any kind).
Therefore, as far as rational behavior is concerned, a rationalist getting hit by a meteor and a rationalist being penalized because of an fMRI exam are equivalent scenarios.
Besides, being rational isn’t having a particular kind of brain; it’s behaving in a particular way, even according to your own definition, “optimizing expected selfish utility”. Optimizing is something that an agent does, not a passive property of his brain.
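To make the distinction concrete, here is a minimal sketch (in Python, with made-up actions, probabilities, and payoffs) of what “optimizing expected utility” looks like as something an agent does rather than a property it has:

```python
# Hypothetical illustration: the agent *performs* an optimization step.
# The actions, probabilities, and utilities below are invented examples.

def expected_utility(action, outcomes):
    # outcomes maps each action to a list of (probability, utility) pairs
    return sum(p * u for p, u in outcomes[action])

def choose(actions, outcomes):
    # Being "rational" here is this act of evaluating and selecting,
    # not any static fact about the agent's hardware.
    return max(actions, key=lambda a: expected_utility(a, outcomes))

outcomes = {
    "take_revenge":  [(0.9, -5), (0.1, 10)],   # EU = -3.5
    "ignore_slight": [(1.0, 2)],               # EU = 2.0
}
best = choose(list(outcomes), outcomes)        # -> "ignore_slight"
```

An exam that penalizes you for *having* a brain that runs this procedure, before you ever call `choose`, is penalizing a property, not a decision; nothing the procedure outputs can change the result.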
A rationalist hit by a meteor was not hit because he was a rationalist. Completely different case.
The word “rationalist” is misleading here.