Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that’s a reasonable concern? Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.
So, I’m inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?
Eliezer, it makes me nervous when my behavior or reasoning differs from the vast majority of human beings. Surely that’s a reasonable concern?
On this planet? No. On this planet, I think you’re better off just worrying about the object-level state of the evidence. Your visceral nervousness has nothing to do with Aumann. It is conformity.
Knowing that people are crazy and the world is mad helps a bit, but not too much because people who are even crazier than average probably explain their disagreements with the world in exactly this way.
What do you care what people who are crazier than average do? You already have enough information to know you’re not one of them. You care what these people do, not because you really truly seriously think you might be one of them, but because of the gut-level, bone-deep fear of losing status by seeming to affiliate with a low-prestige group by saying something that sounds similar to what they say. You may be reluctant to admit that you know perfectly well you’re not in this group, because that also sounds like something this low-prestige group would say; but in real life, you have enough info, you know you have enough info, and the thought has not seriously crossed your mind in a good long while, whatever your dutiful doubts of your foregone conclusion.
Seriously, just make the break, clean snap, over and done.
So, I’m inclined to try to find more detailed explanations of the differences. Is there any reason you can think of why that might be unproductive, or otherwise a bad idea?
Occam’s Imaginary Razor. Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.
You’re wrong, Eliezer. I am sure that I’m not crazier than average, and I’m not reluctant to admit that. But in order to disagree with most of the world, I have to have good reason to think that I’m more rational than everyone I disagree with, or have some other explanation that lets me ignore Aumann. The only reason I referred to people who are crazier than average is to explain why “people are crazy, the world is mad” is not one of those explanations.
Spending lots of time on the meta-level explaining away what other people think is bad for your mental health.
That’s only true if I’m looking for rationalizations, instead of real explanations, right? If so, noted, and I’ll try to be careful.
But in order to disagree with most of the world, I have to have good reason to think that I’m more rational than everyone I disagree with
You’re more rational than the vast majority of people you disagree with. There, I told you up front. Is that reason enough? I can understand why you’d doubt yourself, but why should you doubt me?
That’s only true if I’m looking for rationalizations, instead of real explanations, right? If so, noted, and I’ll try to be careful.
I’m not saying that you should deliberately stay ignorant or avoid thinking about it, but I suspect that some of the mental health effects of spending lots of time analyzing away other people’s disagreements would happen to you even if you miraculously zeroed in on the true answer every time. Which you won’t. So it may not be wise to deliberately invest extra thought-time here.
Or maybe divide healthy and risky as follows: Healthy is what you do when you have a serious doubt and are moving to resolve it, for example by reading more of the literature, not to fulfill a duty or prove something to yourself, but because you seriously think there may be stuff out there you haven’t read. Risky is anything you do because you want to have investigated in order to prove your own rationality to yourself, or because it would feel too immodest to just think outright that you had the right answer.
The only reason I referred to people who are crazier than average is to explain why “people are crazy, the world is mad” is not one of those explanations.
It is if you stick to the object level. Does it help if I rephrase it as “People are crazy, the world is mad, therefore everyone has to show their work”? You just shouldn’t have to spend all that much effort to suppose that a large number of people have been incompetent. It happens so frequently that if there were a Shannon code for describing Earth, “they’re nuts” would have a single-symbol code in the language. Now, if you seriously don’t know whether someone else knows something you don’t, then figure out where to look and look there. But the answer may just be “4”, which stands for Standard Explanation #4 in the Earth Description Language: “People are crazy, the world is mad”. And in that case, spending lots of effort in order to develop an elaborate dismissal of their reasons is probably not good for your mental health and will just slow you down later if it turns out they did know something else. If by a flash of insight you realize there’s a compact description of a mistake that a lot of other people are making, then this is a valuable thing to know so you can avoid it yourself; but I really think it’s important to learn how to just say “4” and move on.
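To make the Shannon-code image concrete, here is a minimal sketch in Python. The explanation frequencies are invented purely for illustration; the code builds a Huffman code, a standard optimal prefix code in the Shannon spirit, in which the most frequent symbol receives the shortest codeword.

```python
import heapq
from itertools import count

def huffman_code(freqs):
    """Build a Huffman prefix code: frequent symbols get short codewords."""
    tiebreak = count()  # prevents heapq from ever comparing the dicts
    heap = [(f, next(tiebreak), {sym: ""}) for sym, f in freqs.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + w for s, w in c1.items()}
        merged.update({s: "1" + w for s, w in c2.items()})
        heapq.heappush(heap, (f1 + f2, next(tiebreak), merged))
    return heap[0][2]

# Invented frequencies for "standard explanations" of a disagreement.
freqs = {
    "they're nuts": 0.55,
    "they know something I don't": 0.20,
    "different values": 0.15,
    "talking past each other": 0.10,
}
for explanation, code in sorted(huffman_code(freqs).items(),
                                key=lambda kv: len(kv[1])):
    print(f"{code:>4}  {explanation}")
# "they're nuts" gets the single-bit codeword; rarer explanations get longer ones.
```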
It will come as a surprise to few people that I disagree strongly with Eliezer here; Wei should not take his word for the claim that Wei is so much more rational than all the folks he might disagree with that he can ignore their differing opinions. Where is this robust rationality test used to compare Wei to the rest of the intellectual world? Where is the evidence for this supposed mental health risk of considering the important evidence of the opinions of others? If the world is crazy, then very likely so are you. Yes, it is a good sign if you can show some of your work, but you can almost never show all of your relevant work. So we must make inferences about the thought we have not seen.
Well, I think we both agree on the dangers of a wide variety of cheap talk—or to put it more humbly, you taught me on the subject. Though even before then, I had developed the unfortunate personal habit of calling people’s bluffs.
So while we can certainly interpret talk about modesty and immodesty in terms of rhetoric, isn’t the main testable prediction at stake the degree to which Wei Dai should often find, on further investigation, that people who disagree with him turn out to have surprisingly good reasons to do so?
Do you think—to jump all the way back to the original question—that if Dai went around asking people “Why aren’t you working on decision theory and anthropics because you can’t stand not knowing the answers?” that they would have some brilliantly decisive comeback that Dai never thought of which makes Dai realize that he shouldn’t be spending time on the topic either? What odds would you bet at?
Brilliant decisive reasons are rare for most topics, and most people can’t articulate very many of their reasons for most of their choices. Their most common reason would probably be that they found other topics more interesting, and to evaluate that reason Wei would have to understand the reasons for thinking all those other topics interesting. Saying “if you can’t prove to me why I’m wrong in ten minutes I must be right” is not a very reliable path to truth.

I’d expect a lot of people to answer “Nobody is paying me to work on it.”
I typically class these types of questions with other similar ones:
What are the odds that a strategy of approximately continuous insanity, interrupted by clear thinking, is a better evolutionary adaptation than continuous sanity, interrupted by short bursts of madness? That the first, in practical, real-world terms, causes me to lead a more moral or satisfying life? Or even, that the computational resources that my brain provides to me as black boxes can only be accessed at anywhere near peak capacity when I am functioning in a state of madness?
Is it easier to be sane, emulating insanity when required to, or to be insane, emulating sanity when required to?
Given that we’re sentient products of evolution, shouldn’t we expect a lot of variation in our thinking?
Finding solutions to real-world problems often involves searching through a space of possibilities that is too big and too complex to search systematically and exhaustively. Evolution optimizes searches in this context by using a random search with many trials: inherent variation among zillions of modular components. I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.
Observing the world for 32-odd years, it appears to me that each human being is randomly imprinted with a way of thinking and a set of ideas to obsess about. (Einstein had a cluster of ideas that were extremely useful for 20th-century physics; most people’s obsessions aren’t historically significant.)
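As an aside, the search claim itself, as distinct from the evolutionary mechanism that is challenged just below, is easy to illustrate in code. Here is a toy sketch in Python, with a made-up fitness landscape and parameters of my own choosing: a population of searchers with varied starting points finds the global peak of a multimodal problem far more reliably than a population of identical searchers.

```python
import math
import random

def fitness(x):
    """Multimodal 1-D landscape; the global peak (~3.04) is near x = 6.8."""
    return math.sin(3 * x) + 0.3 * x if 0 <= x <= 8 else float("-inf")

def hill_climb(x, steps=200, step_size=0.1):
    """Greedy local search: one searcher improving its starting guess."""
    best, best_f = x, fitness(x)
    for _ in range(steps):
        cand = best + random.gauss(0, step_size)
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best_f

def population_best(starts):
    return max(hill_climb(x) for x in starts)

random.seed(0)
POP, TRIALS, THRESHOLD = 10, 200, 2.9  # THRESHOLD ~ "found the global peak"
same = sum(population_best([1.0] * POP) > THRESHOLD for _ in range(TRIALS))
varied = sum(population_best([random.uniform(0, 8) for _ in range(POP)]) > THRESHOLD
             for _ in range(TRIALS))
print(f"identical starting points: {same}/{TRIALS} runs found the global peak")
print(f"varied starting points:    {varied}/{TRIALS} runs found the global peak")
```

The identical-start population gets stuck on the same local peak every time; the varied population almost always has at least one member in the global peak’s basin.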
I hypothesize that we individually think in non-rational ways so that as a population we search through state space for solutions in a more random way.

That’s a group selection argument.

GAME OVER
Is it necessarily? Consider a population dominated by individuals with an allele for thinking in a uniform fashion. Then insert individuals who will come up with original ideas. A lot of the original ideas are going to be false, but some of them might hit the right spot and confer an advantage. It’s a risky, high-variance strategy—the bearers of the originality alleles might not end up as the majority, but might not be selected out of the population either.
Sure, you can resurrect it as a high-variance high-expected-value individual strategy with polymorphism maintained by frequency-dependent selection… but then there’s still no reason to expect original thinking to be less rational thinking. And the original hypothesis was indeed group selection, so byrnema loses the right to talk about evolutionary psychology for one month or something. http://wiki.lesswrong.com/wiki/Group_selection
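For what it’s worth, the “polymorphism maintained by frequency-dependent selection” story is easy to sketch numerically. Below is a toy replicator-dynamics calculation in Python; the payoff numbers are invented purely for illustration, not drawn from any model in the thread.

```python
def step(p, dt=0.1):
    """One replicator-dynamics update; p = fraction of original thinkers."""
    w_original = 1.2 - 0.8 * p   # payoff falls as originals crowd the niche
    w_conformist = 1.0           # baseline payoff, frequency-independent
    w_mean = p * w_original + (1 - p) * w_conformist
    return p + dt * p * (w_original - w_mean)

p = 0.01  # originals start rare
for _ in range(500):
    p = step(p)
print(f"equilibrium fraction of original thinkers: {p:.3f}")
# Settles at p = 0.25, where the two payoffs are equal, so the originality
# allele neither fixes nor goes extinct: a stable polymorphism.
```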
It seems to be extremely popular among a certain sort of amateur evolutionary theorist, though—there’s a certain sort of person who, if they don’t know about the incredible mathematical difficulty, will find it very satisfying to speculate about adaptations for the good of the group.
That’s me. I don’t know anything about evolutionary biology—I’m not even an amateur. Group selection sounded quite reasonable to me, and now I know that it isn’t borne out by observation or the math. I can’t jump into evolutionary arguments; moratorium accepted.
See:

“As a result many are beginning to recognize that group selection, or more appropriately multilevel selection, is potentially an important force in evolution.”

http://en.wikipedia.org/wiki/Group_selection
I’m no evo-bio expert, but it seems like you could make it work as something of a kin selection strategy too. If you don’t think exactly like your family, then when your family does something collaborative, the odds that one of you has the right idea are higher. Families do often work together on tasks; the more the family that thinks differently succeeds, the better they and their think-about-random-nonconforming-things genes do. Or does assuming that families will often collaborate and postulating mechanisms to make that go well count as a group selection hypothesis?
Anecdotally, it seems to me that across tribes and families, people are less likely to try to occupy a niche that already looks filled. (Which of course would be a matter of individual advantage, not tribal advantage!) Some of the people around me may have failed to enter their area of greatest comparative advantage, because even though they were smarter than average, I looked smarter.
Example anecdote: A close childhood friend who wanted to be a lawyer was told by his parents that he might not be smart enough because “he’s not Eliezer Yudkowsky”. I heard this, hooted, and told my friend to tell his parents that I said he was plenty smart enough. He became a lawyer.

THAT had a tragic ending!

Why would evolution’s search results tend to search in the same way evolution searches?
They search in the same way because random sampling via variability is an effective way to search. Humans could perform effective searches through variation at either the individual level or the population level (for example, a sentient creature could model all different kinds of thought to think of different solutions), but I was arguing for variation at the population level.
Variability at the population level is explained by the fact that we are products of evolution.
Of course, human searches are effective as a result of both kinds of variation.
Not that any of this was thought out before your question… This is the usual networked-thought-reasoning versus linear-written-argument mapping problem.
random sampling via variability is an effective way to search
No it’s not. It is one of the few search methods that are simple enough to understand without reading an AI textbook, so a lot of nontechnical people know about it and praise it and assign too much credit to it. And there are even a few problem classes where it works well, though what makes a problem this “easy” is hard to understand without reading an AI textbook. But no, it’s not a very impressive kind of search.
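A toy demonstration of the point, with an objective and parameters of my own choosing rather than anything from the thread: given the same evaluation budget, blind random sampling loses badly to even the dumbest local search on a smooth problem.

```python
import random

def sphere(x):
    """Smooth 10-D objective: minimize the sum of squares (optimum 0)."""
    return sum(v * v for v in x)

random.seed(0)
DIM, BUDGET = 10, 2000

# Blind random sampling: spend the whole budget on independent points.
best_random = min(sphere([random.uniform(-5, 5) for _ in range(DIM)])
                  for _ in range(BUDGET))

# The dumbest local search, on the same budget: keep a point, try a small
# Gaussian nudge, and keep the nudge only if it improves the objective.
x = [random.uniform(-5, 5) for _ in range(DIM)]
best_hill = sphere(x)
for _ in range(BUDGET - 1):
    cand = [v + random.gauss(0, 0.3) for v in x]
    f = sphere(cand)
    if f < best_hill:
        x, best_hill = cand, f

print(f"blind random sampling, best found: {best_random:.3f}")
print(f"greedy hill climbing, best found:  {best_hill:.3f}")
```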
Heh, I came to a similar thought walking home after asking the question… that it seems at least plausible that the only kinda-powerful optimization processes simple enough to pop up randomly-ish are the ones that do random sampling via variability.
I’m not sure it makes sense that variability at the population level is much explained by coming from evolution, though. It seems to me, as a bound, that we just don’t have enough points in the search space for it to be worth it even with 6 billion minds, and especially not at the population sizes that held during most of evolution. Then there’s the whole difficulty with group selection, of course. My intuition says no… yours says yes though?