Rationalists often presume that it is possible to do much better than average by applying a small amount of optimization power. This is true in many domains, but can get you in trouble in certain places (see: the valley of bad rationality).
Rationalists often fail to compartmentalize, even when it would be highly useful.
Rationalists are often overconfident (see: SSC calibration questions) but believe they are well calibrated (bias blind spot, also just knowing about a bias is not enough to unbias you)
Rationalists don’t even lift bro.
Rationalists often fail to take marginal utility arguments to their logical conclusion, which is why they spend their time on things they are already good at rather than power leveling their lagging skills (see above). (Actually, I think we might be wired for this in order to seek comparative advantage in tribal roles.)
Rationalists often presume that others are being stupidly irrational, when really those people just have significantly different values, operate largely in domains without strong reinforcement mechanisms for systematic thought, or are stuck in a local maximum where crossing the chasm to a better one would be very costly.
Rationalists still have too much trust in scientific studies, especially psychological studies.
Many do.
http://thefutureprimaeval.net/why-we-even-lift/
If you’re referring to the calibration questions on the 2014 LW survey, rationalists were pretty well calibrated on them (though a bit overconfident). I described some analyses of the data here and here, and here’s a picture:
(where the amount of overconfidence is shown by how far the blue dots are below the black line)
I don’t know of any data on whether rationalists believe they are well calibrated on these sorts of questions—I suspect that a fair number of people would guess that they are overconfident.
I’ll also note here that I’m planning to do some analyses of the calibration questions on the 2016 LW Diaspora Survey during the next month. I think that there are issues with some of the questions that were on the survey, so before I do any analyses I’ll note that my preferred analyses will only include 4 of the questions:
Which is heavier, a virus or a prion?
What year was the fast food chain “Dairy Queen” founded? (Within five years)
Without counting, how many keys on a standard IBM keyboard released after 1986, within ten?
What’s the diameter of a standard soccerball, in cm within 2?
For thoroughness I will also do some secondary analyses which include 7 questions, those 4 plus the following 3 (even though I think that these 3 questions have some issues which make them less good as tests of calibration):
I’m thinking of a number between one and ten, what is it?
Alexander Hamilton appears on how many distinct denominations of US Currency?
How many calories in a reese’s peanut butter cup within 20?
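The perfect-calibration line in the plot above is just "stated confidence equals observed accuracy," and overconfidence is the gap between them. Here's a minimal sketch of that comparison in Python; the data is made up for illustration, since the actual survey responses aren't reproduced here:

```python
def calibration_table(responses, bins=(50, 60, 70, 80, 90, 100)):
    """Group (stated_confidence_pct, was_correct) pairs into confidence
    bins and compare mean stated confidence with actual accuracy.
    Overconfidence = stated confidence minus observed accuracy."""
    table = []
    for lo, hi in zip(bins, bins[1:]):
        bucket = [(c, ok) for c, ok in responses if lo <= c < hi]
        if not bucket:
            continue
        mean_conf = sum(c for c, _ in bucket) / len(bucket)
        accuracy = 100 * sum(ok for _, ok in bucket) / len(bucket)
        table.append((mean_conf, accuracy, mean_conf - accuracy))
    return table

# Illustrative, invented data: (confidence %, answered correctly?)
responses = [(55, True), (55, False), (65, True), (65, False),
             (75, True), (75, True), (75, False), (95, True), (95, False)]
for conf, acc, over in calibration_table(responses):
    print(f"stated {conf:.0f}%  actual {acc:.0f}%  overconfidence {over:+.0f}")
```

Plotting accuracy against stated confidence from a table like this gives the blue-dots-vs-black-line picture described above: dots below the diagonal mean overconfidence.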
Why not?
They do: http://thefutureprimaeval.net/why-we-even-lift/
Requires (non-mental) effort.
I’ve lurked around a bit and akrasia seems to be a consistent problem—I’d imagine that requires mental effort.
But on topic, I doubt that lifting weights requires no mental effort. You still need to choose a menu, choose your lifting program, and consistently make sure you're doing things right. In fact, common failure modes of dieting are usually caused by not putting enough mental energy into proper planning.
And I'd give a special mention to the discipline required to follow through on your meal plan.
Those things definitely take mental effort.
TLDR: What’s the ‘mental effort’ you’re talking about? Running calculations on $bitrate=(brainsize)* all day long?
formula not researched!
“Requires non-mental effort” does NOT imply that no mental effort is required.
The quip points out that nerds (and most local rationalists are nerds) are perfectly fine with spending a lot of mental energy on things of interest, but are generally loath to engage in intense exercise and/or tolerate physical discomfort and pain.