...I stole this example from Julia Galef’s talk “The Straw Vulcan.” Her second example of “straw man rationality” or Hollywood Rationality is the idea that you shouldn’t make a decision until you have all the information you need. This one shows up in Star Trek, too. Giant space amoebas have appeared not far from the Enterprise, and Kirk asks Spock for his analysis. Spock replies: “I have no analysis due to insufficient information… The computers contain nothing on this phenomenon. It is beyond our experience, and the new information is not yet significant.”
Sometimes it’s rational to seek more information before acting, but sometimes you need to just act on what you think you know. You have to weigh the cost of getting more information against the expected value of that information. Consider another example from Gerd Gigerenzer, about a man deciding whom to marry:
...He would have to look at the probabilities of various consequences of marrying each of them — whether the woman would still talk to him after they’re married, whether she’d take care of their children, whatever is important to him — and the utilities of each of these… After many years of research he’d probably find out that his final choice had already married another person who didn’t do these computations, and actually just fell in love with her.
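To make the parody concrete, here is a minimal sketch of the calculation Gigerenzer is describing; the candidate names, probabilities, and utilities are entirely made up for illustration:

```python
# Minimal sketch of the calculation Gigerenzer is parodying: for each
# candidate, sum probability-weighted utilities over the consequences
# the suitor cares about. All names and numbers are invented.

candidates = {
    "Alice": {"still_talks_to_him": (0.9, 10), "cares_for_children": (0.7, 25)},
    "Beth":  {"still_talks_to_him": (0.6, 10), "cares_for_children": (0.95, 25)},
}

def expected_utility(consequences):
    """Sum of P(consequence) * utility(consequence)."""
    return sum(p * u for p, u in consequences.values())

for name, consequences in candidates.items():
    print(name, expected_utility(consequences))
# The joke is that gathering the real probabilities would take years.
```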
Edit: I actually agree with you and muflax that some of this rationality-stuff is pretty problematic and/or impossible to apply in real life. But I think Yudkowsky’s visit to New York is a bad example to illustrate these concerns.
Sometimes it’s rational to seek more information before acting, but sometimes you need to just act on what you think you know. You have to weigh the cost of getting more information against the expected value of that information.
How does this change anything? You are still left with an expected utility calculation. In this case it is the expected utility of gathering more information.
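For concreteness, here is a toy sketch of what that calculation looks like: the expected value of (perfect) information is itself an expected-utility computation, shown here with invented states, actions, and utilities.

```python
# Toy value-of-information calculation: deciding whether to gather more
# information is itself an expected-utility problem, which is the regress
# the comment points at. States, actions, and utilities are invented.

p_state = {"good": 0.6, "bad": 0.4}           # prior over the unknown state
utility = {                                   # utility[action][state]
    "act_now":  {"good": 100, "bad": -50},
    "hold_off": {"good": 0,   "bad": 0},
}

def eu(action):
    return sum(p_state[s] * utility[action][s] for s in p_state)

best_without_info = max(eu(a) for a in utility)

# With perfect information we would learn the state first and then pick
# the best action for that state.
best_with_info = sum(p_state[s] * max(utility[a][s] for a in utility)
                     for s in p_state)

value_of_information = best_with_info - best_without_info
print(best_without_info, best_with_info, value_of_information)
# Gather the information only if its cost is below value_of_information --
# but note that this comparison is itself an expected-utility calculation.
```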
From the article:
If intuition will give you better results than slow, deliberative reasoning, then rationally you should use intuition.
Again, this changes nothing. In this case you will have to calculate the expected utility of using your intuition. Which seems just as impossible to me. All you can possibly do is use your intuition to decide whether you should use your intuition.
Suppose someone told you that their intuition says that they should not act on the information they have about risks from AI and that the value of seeking more information is too low because they don’t expect to find any information that would change their mind at this point. Then how could you possibly come to an agreement with them about risks from AI if you both rely on your intuitions?
The post you linked to talks a lot about “winning”. But if you define rationality in terms of winning, then how exactly are you going to figure out what is “rational” without any information about how reliable your intuitions or heuristics are in a particular situation?
The article seems to argue against a frequentist approach, even though that is the only way to decide which strategy actually wins. Otherwise, if you are not willing to wait for new information, you end up relying on your intuition either way, whether you choose to apply the principle of expected utility or to trust your intuition directly.
In other words, if a frequentist approach is impossible, you might as well just say that you “feel” you are right, rather than claim that it is rational.
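One hedged illustration of what that frequentist check could look like: score a track record of intuition-based forecasts against explicitly calculated ones. The forecasts and outcomes below are invented for the sake of the example.

```python
# One way to get frequentist evidence about "which strategy wins": keep a
# record of probability forecasts made by intuition vs. explicit calculation
# and score them against outcomes. The records below are invented.

def brier_score(predictions):
    """Mean squared error of probability forecasts; lower is better."""
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

intuition = [(0.9, 1), (0.8, 0), (0.7, 1), (0.6, 0)]    # (forecast, outcome)
calculation = [(0.7, 1), (0.4, 0), (0.8, 1), (0.3, 0)]

print("intuition:  ", brier_score(intuition))
print("calculation:", brier_score(calculation))
# Without some track record like this, "rational" and "feels right" are hard
# to tell apart -- which is the commenter's point.
```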
All you can possibly do is use your intuition to decide whether you should use your intuition.
Yes, you have to start with something more basic than expected utility calculation, or you run into an infinite regress. Expected utility calculations are tools and you use them to achieve your goals more effectively. If you want to shoot yourself in the foot, nobody can prevent you from doing so.
Suppose someone told you that their intuition says that they should not act on the information they have about risks from AI and that the value of seeking more information is too low because they don’t expect to find any information that would change their mind at this point. Then how could you possibly come to an agreement with them about risks from AI if you both rely on your intuitions?
You can’t reach an agreement. Humans (or minds in general) with widely divergent intuitions or epistemological standards have very different beliefs and it can be impossible for them to come to an agreement. There are no universally compelling arguments that convince all possible minds.
I don’t see how it’s impossible to assign probabilities by using your intuitions. “Go ahead and pick a number out of the air, but then be very willing to revise it upon the slightest evidence that it doesn’t fit well with your other numbers.”
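As a rough sketch of “pick a number, then revise it”, here is an intuitive prior updated with Bayes’ rule as evidence arrives; all the numbers are invented.

```python
# "Pick a number out of the air, then revise it": start from an intuitive
# prior and update with Bayes' rule as evidence comes in. Numbers invented.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Posterior P(hypothesis | evidence) from an intuitive prior."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

belief = 0.3                      # number picked out of the air
for likelihood_true, likelihood_false in [(0.8, 0.4), (0.7, 0.5)]:
    belief = bayes_update(belief, likelihood_true, likelihood_false)
    print(round(belief, 3))
# 0.3 -> 0.462 -> 0.545: the initial guess matters less as evidence accumulates.
```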
Again, this changes nothing. In this case you will have to calculate the expected utility of using your intuition. Which seems just as impossible to me.
I totally agree that it’s impossible to do exactly. So people use approximations everywhere. The trigger for the habit is thinking something like “Moving to California is a big decision.” Then you think “Is there a possibility for a big gain if I use more deliberative reasoning?” Then, using a few heuristics, you may answer “yes.” And so on, approximating at every step, since that’s the only way to get anything done.
Hm, that seems to be more in the context of “patching over” ideas that are mostly right but have some problems. I’m talking about “fixing” theories that are exactly right but impossible to apply.
One of the more interesting experiences I’ve had learning about physics is how much of our understanding of physics is a massive oversimplification, because it’s just too hard to calculate the exact answer. Most Nobel Prize-winning work comes not from new laws of physics, but from figuring out how to approximate those laws in a way that is complicated enough to be useful but just simple enough to be solvable. And so with rationality in this case, I think. The high-importance rationality work is not about new laws of rationality or strange but easy stuff, but about approximations of rationality that are complicated enough to be useful but simple enough to be solvable.
I’ll just quote Lukeprog’s “Facing the Singularity, chapter 3” because I’m lazy.
You mean something along the lines of what I have written here?