I think you are using “rational” with two different meanings. If looking down will cause you to freeze and panic, then the rational thing is not to look down. If knowledge of the fact that you’re taking sugar pills destroys the placebo effect, then the rational thing is not to know you’re taking sugar pills (assuming you’ve exhausted all other options). It’s either that, or directly hacking your brain.
A better way to describe this might be to call these phenomena “irrational feelings”, “irrational reactions”, etc. The difference is, they’re all unintentional. So while you’re always rational in your intentional actions, you can still be unintentionally affected by some irrational feelings or reactions. And you correct for those unintentional reactions (which supposedly you can’t simply remove) by changing your intentional ones (i.e. you intentionally and rationally decide not to look down, because you know you will otherwise be affected by the “irrational reaction” of panicking).
Ah, but you can’t choose not to know about the sugar pills. At most, you can choose not to investigate a therapy that seems to be working.
But in terms of developing and extending your powers of rationality, you can’t embrace a delusion while at the same time working to be a better rationalist. You have to decide between the spurious benefits of a possible placebo, and being rational.
Since the placebo effect has mostly to do with how you feel about how you feel, it wouldn’t be very important in any case.
Let’s just be clear. You are very nearly equivocating on “rational”. There are two basic definitions, though it may be natural to add more for some purposes. Essentially, what you are pointing out is that it is sometimes instrumentally rational to be epistemically irrational.
I don’t see much of a problem with this. As rationalists, we primarily want to be instrumentally rational. Scratch that, it’s the only thing we want (intrinsically). Being epistemically rational just happens to be the best way to achieve our ends in a large percentage of cases. It also may have a direct component in our utility function, but that’s another issue.
There is another definition, one better than either of those two, not only because it is more useful but because it is generally used and recognized.
With sufficiently limited resources, it can be rational (in that sense) to be irrational.
I think you forgot to mention what that definition is.
Seriously, Annoyance, it wouldn’t kill you to link to your own post. Sheesh.
“As rationalists, we primarily want to be instrumentally rational. Scratch that, it’s the only thing we want (intrinsically).”
No. I’m not sure why you believe that our wants are outside the domain of rationality’s influence, but they are not.
Wants are outside the domain of instrumental rationality—by definition. In other words, instrumental rationality can be applied to any goal.
The only thing we want is to get the things that we want in the most efficient way possible. In other words, to be instrumentally rational.
If what we want is to reach our wants without using the most efficient way possible, what method should we use?
Efficiency, at least the way I’m using the term, is relative to our values. If we don’t want to use the most efficient method possible to achieve something, then something about that method causes it to have a negative term in our utility function which is just large enough to make another alternative look better. So then it really isn’t the most efficient alternative we have.
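The argument above can be sketched numerically. This is a toy illustration with made-up numbers (the methods, payoffs, and penalty are all hypothetical): a method that looks most efficient on the raw benefit can carry a negative term in the utility function, so once our values are counted, another alternative scores higher and is the genuinely efficient one.

```python
def utility(benefit, penalty=0.0):
    """Total utility of a method: raw benefit minus any value-based penalty."""
    return benefit - penalty

# Method A yields more raw benefit, but involves something we disvalue,
# modeled as a penalty term just large enough to tip the comparison.
method_a = utility(benefit=10.0, penalty=4.0)  # nominally "most efficient"
method_b = utility(benefit=8.0)                # no penalty

# Relative to our values, B is the most efficient alternative after all.
best = max([("A", method_a), ("B", method_b)], key=lambda pair: pair[1])
print(best[0])
```

The point the sketch makes is definitional: "efficiency" here is already measured in utility, so a method we refuse to use because of what it involves was never the most efficient one by that measure.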