This is a crazy idea that I’m not at all convinced about, but I’ll go ahead and post it anyway. Criticism welcome!
Rationality and common sense might be bad for your chances of achieving something great, because you need to irrationally believe that it’s possible at all. That might sound obvious, but such idealism can make the difference between failure and success even in science, and even at the highest levels.
For example, Descartes and Leibniz saw the world as something created by a benevolent God and full of harmony that can be discovered by reason. That’s a very irrational belief, but they ended up making huge advances in science by trying to find that harmony. In contrast, their opponents Hume, Hobbes, Locke etc. held a much more LW-ish position called “empiricism”. They all failed to achieve much outside of philosophy, arguably because they didn’t have a strong irrational belief that harmony could be found.
If you want to achieve something great, don’t be a skeptic about it. Be utterly idealistic.
In brainstorming, a common piece of advice is to let down your guard and just let the ideas flow without any filters or critical thinking, and then follow up with a review to select the best ones rationally. The concept here is that your brain has two distinct modes of operation, one for creativity and one for analysis, and that they don’t always play well together, so by separating their activities you improve the quality of your results. My personal approach mirrors this to some degree: I rapidly alternate between these two modes, starting with a new idea, then finding a problem with it, then proposing a fix, then finding a new problem, etc. Mutation, selection, mutation, selection… Evolution, of a sort.
Understanding is an interaction between an internal world model and observable evidence. Every world model contains “behind the scenes” components which are not directly verifiable and which serve to explain the more superficial phenomena. This is a known requirement to be able to model a partially observable environment. The irrational beliefs of Descartes and Leibniz which you describe motivated them to search for minimally complex indirect explanations that were consistent with observation. The empiricists were distracted by an excessive focus on the directly verifiable surface phenomena. Both aspects, however, are important parts of understanding. Without intangible behind-the-scenes components, it is impossible to build a complete model. But without the empirical demand for evidence, you may end up modeling something that isn’t really there. And the focus on minimal complexity as expressed by their search for “harmony” is another expression of Occam’s razor, which serves to improve the ability of the model to generalize to new situations.
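To make the Occam’s razor point concrete, here is a minimal sketch (my own illustration, nothing from the comment above): two models fit to the same noisy observations of a hidden law, where the needlessly complex one typically generalizes worse than the minimal one. The hidden law plays the role of the behind-the-scenes component: it is never observed directly, only through noisy evidence.

```python
# Sketch: a minimally complex hypothesis about a hidden law usually generalizes
# better than an over-complex one fit to the same noisy observations.
import numpy as np

rng = np.random.default_rng(0)

def observe(x):
    # Hidden "behind the scenes" law: y = 2x + 1, seen only through noise.
    return 2 * x + 1 + rng.normal(scale=0.5, size=x.shape)

x_train = rng.uniform(0, 1, 20)
x_test = rng.uniform(0, 1, 1000)
y_train, y_test = observe(x_train), observe(x_test)

for degree in (1, 9):  # minimal-complexity model vs. an over-complex one
    coeffs = np.polyfit(x_train, y_train, degree)
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: held-out mean squared error = {mse:.3f}")
```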
A lot of focus is given to the scientific method’s demand for empirical evidence of a falsifiable hypothesis, but very little emphasis is placed on the act of coming up with those hypotheses in the first place. You won’t find any suggestions in most presentations of the scientific method as to how to create new hypotheses, or how to identify which new hypotheses are worth pursuing. And yet this creative part of the cycle is every bit as vital as the evidence gathering. Creativity is poorly understood compared to rationality, despite being one of the two pillars of scientific and technological advancement (verification with evidence being the other). By searching for harmony in nature, Descartes and Leibniz were engaging in a pattern-matching process, searching the hypothesis space for good candidates for scientific evaluation. They were supplying the fuel that runs the scientific method. With a surplus of fuel, you can go a long way even with a less efficient engine. You might even be able to fuel other engines, too.
I would love to see some meta-scientific research as to which variants of the scientific method are most effective. Perhaps an artificial, partially observable environment whose full state and function are known to the meta researchers could be presented to non-meta researchers, as objects of study, to determine which habits and methods are most effective for identifying the true nature of the artificial environment. This would be like measuring the effectiveness of machine learning algorithms on a set of benchmark problems, but with humans in place of the algorithms. (It would be great, too, if the social aspect of scientific research were included in the study, effectively treating the scientific community as a distributed learning algorithm.)
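A hypothetical, heavily stripped-down sketch of what such a benchmark environment could look like (the class, the hidden rule, and the scoring are all invented for illustration): the meta researchers know the ground truth, while the studied researchers can only run noisy experiments and then submit a hypothesis, which is scored against the truth on unseen cases.

```python
# Toy "artificial partially observable environment" for meta-scientific study.
import random

class HiddenEnvironment:
    """The meta researchers know the hidden rule; subjects see only noisy data."""
    def __init__(self, seed=0):
        self._rng = random.Random(seed)
        self._a, self._b = 3.0, -1.0   # ground truth, hidden from subjects

    def run_experiment(self, x):
        """What the studied researchers get: a noisy observation at their chosen x."""
        return self._a * x + self._b + self._rng.gauss(0, 0.1)

    def score_hypothesis(self, hypothesis, n_trials=100):
        """Meta-researcher side: how well does a submitted hypothesis generalize?"""
        xs = [self._rng.uniform(-10, 10) for _ in range(n_trials)]
        return sum((hypothesis(x) - (self._a * x + self._b)) ** 2 for x in xs) / n_trials

env = HiddenEnvironment()
# A subject who guesses the form of the rule from just two experiments:
x0, x1 = 0.0, 1.0
y0, y1 = env.run_experiment(x0), env.run_experiment(x1)
subject_hypothesis = lambda x: (y1 - y0) * x + y0
print("hypothesis error:", env.score_hypothesis(subject_hypothesis))
```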
Am I correct to paraphrase you this way: maximizing EX and maximizing P(X > a) are two different problems?
What are the meanings of these symbols “EX”, “P(X>a)”?
X is a random variable, E is expected value (a.k.a. average), P is probability. For example, if X is uniformly distributed between 0 and 1, then EX=0.5 and P(X>0.75)=0.25.
Sarunas is saying that some action might not affect the average value, but strongly affect the chances of getting a very high or very low value (“swing for the fences”, so to speak). For example, if we define Y as X rounded to the nearest integer (i.e. Y=0 if X<0.5 and Y=1 if X>0.5), then EY=0.5 and P(Y>0.75)=0.5. The average of Y is the same as the average of X, but the probability of getting an extreme value is higher.
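A quick Monte Carlo check of this example (my addition, in Python), confirming that rounding preserves the mean but doubles the probability of exceeding 0.75:

```python
import random

random.seed(0)
xs = [random.random() for _ in range(100_000)]   # X ~ Uniform(0, 1)
ys = [round(x) for x in xs]                      # Y = X rounded to 0 or 1

print("EX =", sum(xs) / len(xs))                               # about 0.5
print("EY =", sum(ys) / len(ys))                               # about 0.5
print("P(X > 0.75) =", sum(x > 0.75 for x in xs) / len(xs))    # about 0.25
print("P(Y > 0.75) =", sum(y > 0.75 for y in ys) / len(ys))    # about 0.5
```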
This is probably obvious for others, but it wasn’t obvious for me that by paying 0.1 to go from the first game to the second one you both decrease your average earnings and increase the probability of high earnings.
Yeah, that’s one part of it. Another part is that some irrational beliefs can be beneficial even on average, though of course you need to choose such beliefs carefully. Believing that the world makes sense, in the context of doing research, might be one such example. I don’t know if there are others. Eliezer’s view of Bayesianism (“yay, I’ve found the eternal laws of reasoning!”) might be related here.
Good point. It’s worth noting that you can use Markov’s inequality to relate the two.
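For reference (my addition, not part of the original comment): for a nonnegative random variable X and a > 0, Markov’s inequality gives P(X > a) ≤ EX / a. It only bounds the tail from above, which is why maximizing EX and maximizing P(X > a) can still pull in different directions.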
I wrote a post arguing that what is irrational overconfidence for an individual can be good for society. (In short, scientific knowledge is a public good, individual motivation to produce it is likely too low from a group perspective, and overconfidence increases individual motivation, so it’s good.)
To extend this a bit, if society pays people to produce scientific knowledge (in money and/or status), then overconfident people would be willing to accept a lower “salary” and outcompete more rational individuals for the available positions, so we should expect that most science is produced by overconfident people. (This also applies to any other attribute that increases motivation to work on scientific problems, like intellectual curiosity.) As a corollary, people who produce science about rationality (e.g., decision theorists) are probably more overconfident than average, people who work at MIRI are probably more overconfident than average, etc.
This starts to look like Lake Wobegon.
The argument that overconfident people will be willing to accept lower compensation and so outcompete “more rational individuals” seems to be applicable very generally, from running a pizza parlour to working as a freelance programmer. So, is most everyone “more overconfident than average”?
Good point. :) I guess it actually has to be something more like “comparative overconfidence”, i.e., confidence in your own scientific ideas or assessment of your general ability to produce scientific output, relative to confidence in your other skills. Theoretical science (including e.g., decision theory, FAI theory) has longer and weaker feedback cycles than most business fields like running a pizza parlor, so if you start off overconfident in general, you can probably keep your overconfidence in your scientific ideas/skills longer than your business ideas/skills.
I think it is more interesting to study how to be simultaneously supermotivated about your objectives and realistic about the obstacles. Probably requires some dark arts techniques (e.g. compartmentalization). Personally I find that occasional mental invocations of quasireligious imagery are useful.
Isn’t this the same or related to mental contrasting?
In other words, laziness and overconfidence bias cancel each other out, and getting rid of the second without getting rid of the first will cause problems?
Yes, if you think Hume’s problem was laziness :-)
Isn’t that growth mindset? (Is growth mindset not rational?)
Yeah, it’s a bit similar to growth mindset. Is it rational to believe in growth mindset if you know that believing in it with probability 100% makes it work with probability 50%? :-) I guess it only works if you’re good at compartmentalizing, which is itself an error by LW standards.
I think it’s also similar to Newcomb’s Problem, but I’m failing to think of a 1:1 mapping from what you say to it.
As a data point to support this, overconfidence correlates positively with income.
I think that very much depends on what you mean by rationality. The kind of rationality this community practices leads, for better or worse, to a bunch of people holding contrarian beliefs that certain things are possible which general society doesn’t consider possible.
In HPMOR you have “do the impossible”, “heroic responsibility” and “having something to protect” all as part of the curriculum.
Rationality and common sense might be bad for your chances of achieving something great, because you need to irrationally believe that it’s possible at all.
That is true.
If you want to achieve something great, don’t be a skeptic about it. Be utterly idealistic.
Well, umm… there is the slight issue of cost. If you are deliberately choosing a high-risk strategy to give yourself a chance at a huge payoff, you need to realize that the modal outcome is that you fail. Convincing yourself that you are destined to become a famous actress does improve your chances of getting into the movies, but most people who believe this will end up as waitresses in LA.
It’s like “If you want to become a millionaire, you need to buy lottery tickets” :-/
Yeah. I actually wrote a post about that :-)
Agreed on all points. It could still make sense to adopt high-risk beliefs if you’ve already decided that you want to work on something, and the expected payoff outweighs the cost. Friendly AI development might be one such area.
I don’t know if I would hate on Hume this much. Hume is pretty big.
I agree with your broader point, though, I think. At the highest levels, EVERYTHING has to go right: having the hardware, having a super work ethic, and having a synergistic morale (an irrationally huge view of one’s own importance, etc.).
There are a lot of ways to be irrational, and if enough people are being irrational in different ways, at least some of them are bound to pay off. Using your example, a few of the people with blind idealism may happen to latch onto an idea they can actually accomplish, but most of them fail. The point of trying to be rational isn’t to do everything perfectly, but to systematically increase your chances of succeeding, even though in some cases you might get unlucky.
Achieving something great may require confidence that it’s possible, but whether that confidence was reasonable is only discovered in hindsight. It’s not uncommon to stumble upon a true belief after starting from the wrong evidence.
Well, or find a way to bottle mania.
Hypomania would probably be better.
I would give a pretty penny for an SF novel about a society where hypomania/depression cycles were considered normal and accommodated.
To translate: do not buy this LW-Yudkowsky mantra about how hard, difficult, and dangerous AI is. Do it at home, for yourself, have no fear!
That doesn’t seem like a correct translation. The idealistic belief that might be helpful for AI researchers is more like “a safe AI design is possible and simple”, rather than “all AI research is safe” or “safe AI research is easy”. There’s a large difference. I don’t think Descartes and Leibniz considered their task easy, but they needed the belief that it’s possible at all.
Okay. Be careful, but don’t be too afraid in your AI research. Above all, don’t just wait for MIRI and its AI projects, for they are more like Locke, Hume, or Hobbes than Leibniz or Descartes.
I’m not sure. There’s a bit of tension between LW-ish beliefs, which are mostly empiricist, and Eliezer’s popularity, which I think owes more to his idealistic and poetic attitude coupled with high intelligence. Maybe people should learn more from the latter, rather than the former :-)
Excuse me, but of course it is. To believe that it’s neither possible nor simple is to believe that human minds are so needlessly complex, viewed from the outside, that bottling up our model-forming and evaluation-forming processes for artificial processing is impossible or intractable.
The problem only looks hard because people are applying the wrong maps. They come at it with maps based on pure logic, microeconomic utility theory, and a priori moral philosophizing, when they really need the maps of statistical learning theory, computational cognitive science, and evaluative psychology.
Sometimes when things look really improbable, it’s because you’ve got a very biased prior.