I’m going to post multiple comments here because I have several separate thoughts about these issues and I want them to be voted on separately so I can get a better idea of people’s thoughts on this matter. My comments on this post will be posted as comments to this comment—that way, people can also vote on the concept of posting multiple thoughts as separate comments.
Another problem is that there isn’t really any standard “rationality test” or other way to determine how rational someone actually is, though some limited steps have been taken in that direction. Stanovich is working on one, but at this stage it can’t be expected for another 3+ years.
This obviously limits the extent to which we can determine whether rationalists “actually win” (my impression, incidentally, is that they do, but that for the average person there are many skills that help more than current “rationality training” does), what forms of rationality practice yield the most benefit, and so on.
When it comes to raising the sanity waterline, I can’t help but think that the intelligence issue is likely to be a paper tiger. In fact I think LessWrong as a whole cares far too much about unusually intelligent people and that this is one of the biggest flaws of the community as a general-interest project. However, I also recognize that multiple purposes are at work here and such goal conflict may be inevitable.
Can you elaborate on this? I think intelligence is a really important component of rationality in practice (although by “unusually intelligent” you might mean a higher number of standard deviations above the mean than I do).
Sure. Most rationality “in the wild” appears to consist of tacit rationality and good habits, and I don’t think intelligence is particularly important for that. I would definitely predict, for instance, that rationality training could be made accessible to people with IQs zero to one standard deviation above the mean.
I agree that this kind of rationality exists, but I think it tends to be domain-specific and suffer from transfer issues, and I’m also skeptical that it’s easily teachable.
I agree on all points, but I don’t see strong evidence for an easily teachable form of general rationality either, regardless of how intelligent the audience may be.
One other issue is that most of the people who have worked on developing rationality so far are themselves very intelligent. This sounds like it shouldn’t be much of a problem, but as Eliezer wrote in My Way:
“If there are parts of my rationality that are visibly male, then there are probably other parts—perhaps harder to identify—that are tightly bound to growing up with Orthodox Jewish parents, or (cough) certain other unusual features of my life.”
Intelligence definitely strikes me as one of those unusual features.
Perhaps current rationality practices, designed by the highly intelligent and largely practiced by the same, do require high intelligence; but it seems far from clear that all rationality practices must.
Fair point.
First off, one potential problem is the term “rationality” itself. MIRI found that the term “singularity” had been too corrupted by other associations to be useful, so they changed their name to avoid the association. I believe “rational” may be similarly corrupted (“logical” certainly is), and finding another term altogether might be a good tactic.
I think “rational” is probably fine. “Rationalist” may not be, but that’s more thanks to having the connotations of an *ism than because of its stem.
Agreed. What about “effective”?
“Effective” does not capture the “map corresponding to the territory” idea, which is very important for us. It also has negative connotations of its own: just as “rational” has Spock, “effective” has all kinds of effective villains. At least Spock seems harmless.
I think having two different words for epistemic and instrumental rationality would be a feature, not a bug. There’s already plenty of overlap between the two (knowing truths is useful, and can easily be subsumed in a discussion of instrumental rationality), but since they do sometimes come into conflict, it would be very valuable to have a concise way to specify which kind of rationality we’re talking about. It would also give the replacement of “rationality” with some other term a function beyond euphemism treadmilling, which would make it easier to justify to the anti-PR crowd.
But I agree “effective” kind of falls flat. Is there an adjective/noun set derivable from “wins” that doesn’t make us sound like Charlie Sheen? (It can be a protologism.)
Something derived from “success”, if you don’t mind sounding like a self-help guru. “Achievement”, if you don’t mind sounding like a primary school teacher. “Optimisation” is pretty accurate, but I suspect it only really works for AI programmers or mathematicians who already have a technical understanding of it.
Huh. I don’t get that connotation at all. OTOH, that may be because I’m not a native speaker, or because I consume unusually little mainstream mass media.
I think the idea of posting multiple comments is good, as long as none of the comments is even partly a prerequisite for the others. I personally don’t think it’s worth voting on the idea. (Just try it out for a while and see whether you like it and whether you get any complaints.) I suggest posting the separate comments at the base level so they’ll sit in their proper karmic order as independent posts; otherwise you lose most of the value of this approach, and you’ll be testing a different idea than the one you ultimately intend to implement.