So, on ways of smoothing the incentive gradient for high-quality reasoning:
This is a reason to have a “rationalist community.” Humans are satisficers. We won’t really care about the opinion of literally all 7 billion people on Earth if we have the approval of our own tribe. If our tribe has some norms about how conversation and thinking work, then we’ll be pretty well able to follow those norms, so long as we expect that our needs can be met within the tribe—that is, that it’s a good place to find friends, mates, careers, etc.
It’s also a reason to think about how UX affects discourse. I’m by no means an expert in this, but for instance: What does karma reward? What types of expression get attention? How can we offer rewards for behaviors we like?
That only helps if your “rationalist community” in fact pushes you to more accurate reasoning. Merely giving your community that name is far from sufficient, however, and in my experience the “rationalist community” is mostly that in name only.
This seems too uncharitable (I mean, “mostly” is kind of ambiguous in this context so it might be true, but...). I have plenty of complaints, and certainly things could be much better, but I think the rationalists in fact reward accuracy / high-quality reasoning much more than the surrounding community of bay area engineers, which itself rewards accuracy much more than US elite culture, which itself rewards accuracy much more than US culture more broadly.
For example, we do in fact put an unusual amount of stock in correct logical argument, sound probabilistic reasoning, and scientific inquiry, which do in fact tend to produce more accurate conclusions.
“Charitable” seems an odd name for the tendency to assume that you and your friends are better than other people because, well, it just sure seems that way to you and your friends. You don’t have an accuracy track record of this group to refer to, right?
What kind of track record do you expect, and what other people are you comparing to? For example, are there academic communities for which you would grant the existence of such a track record, outside of the experimental sciences? For those communities, how would you respond to a comment like yours?
For example, I think that economists also have a set of norms for arriving at truer conclusions about society, but they also don’t have an easy-to-point-to track record of success as a community.
If you think economists count, then the bay area rationalists will count simply by virtue of arriving at a set of views that mirror mainstream economic views much more closely than does the average US elite consensus. But realistically, I don’t think that you can make the kind of case you are looking for on behalf of economists, and if you can, then it will involve weakening the standards in a way that lets us make the same case for the rationalists.
If you can’t name any communities that have such a track record, then this seems like a weak test of whether a community’s efforts to promote accurate conclusions are in name only. (Not necessarily a worthless one, but at least one that should be regarded with skepticism.)
I do think that e.g. bay area rationalists have substantially more accurate views about the topics they talk about than the world at large (on the future, AI, economics, politics, aid, cognitive science, etc.). This is largely driven by observing the rationalist views, using what I consider the best epistemic norms available, and finding the rationalist views to better accord with the output of that process. Make of that what you will.
Bay area rationalists appear to make better investments than average (dominated by very profitable bets on bitcoin, but also bets on AI/tech and a reliance on indices / skepticism of market returns); to work in higher-paying jobs; to have views that more closely track those of traditionally recognized experts (which I expect to be more accurate than the median elite view); and to make much more extensive quantitative predictions, with better predictive track records than pundits in the cases where comparisons are possible (though this is probably just due to being numerate, an issue that makes it basically impossible to compare quantitative track records to conventional elites).
In most cases, the rationalists’ high intelligence and prevalence of mental dysfunction are going to have a larger effect on their thinking than the community’s norms, so I don’t think that pointing to a strong track record is even going to be persuasive to you—you will just (correctly) dismiss it by saying “but we need to compare the rationalists to other people who are similarly smart...”—unless we manage to find a control group with similar levels of intelligence. And if we do find people with similar levels of intelligence, then they will quite plausibly be doing better than the rationalists on lots of conventional measures, and I will (correctly) dismiss this by saying “but we need to compare the rationalists to other people who have similar levels of other abilities...”
In general, I feel you should engage more with quantitative detail about the difficulty of establishing the kind of track record that would be persuasive. I have a similar complaint regarding fire-the-CEO markets and other scaled-up field experiments. It looks to me like it is going to take forever to make a compelling case if you are relying on track record rather than theory (unless people are willing to trust short-term market movements, which (a) they mostly aren’t, and (b) in that case it’s nearly a tautology that fire-the-CEO markets work, and the empirical data is just showing you that nothing surprising goes wrong). Yes, you can take the line that someone else should publish a criticism along these lines, but if you actually want the idea to get adopted, it falls to you to do at least a basic power analysis.
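To give a sense of why such a track record accrues slowly, here is a back-of-the-envelope power calculation of the kind gestured at above. All the numbers (Brier-score means, per-question variance, effect sizes) are made-up assumptions for illustration, not measured values:

```python
# Rough power analysis: how many resolved, directly comparable predictions
# would it take to show one group's mean Brier score beats another's?
# Standard two-sample sample-size formula; all inputs are illustrative.
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Predictions needed per group for a two-sample comparison of means.

    delta:   assumed true difference in mean Brier score between the groups
    sigma:   assumed per-prediction standard deviation of Brier scores
    z_alpha: normal quantile for a two-sided 5% significance level
    z_beta:  normal quantile for 80% power
    """
    return math.ceil((z_alpha + z_beta) ** 2 * 2 * sigma ** 2 / delta ** 2)

# Suppose one group truly averages 0.20 and the other 0.25 (delta = 0.05),
# with per-question SD 0.15: already well over a hundred resolved,
# directly comparable predictions per group.
print(n_per_group(delta=0.05, sigma=0.15))

# A subtler true edge (delta = 0.02) pushes the requirement into the
# high hundreds per group.
print(n_per_group(delta=0.02, sigma=0.15))
```

Since real comparisons also require the two groups to have answered the same questions, with the same resolution criteria, the practical bar is even higher than these counts suggest.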
Similarly, you can take the line that the rationalists should be in the business of figuring out exactly what kinds of track record would be persuasive to someone with your perspective. But if you actually want to affect the rationalists’ behavior, you would probably need to make some argument that the rationalists could stand to benefit by attempting to establish the kind of track record you are interested in, or that they should infer much from the non-existence of such a record, or something like that.
I said I haven’t seen this community as exceptionally accurate; you say that you have, and called my view “uncharitable”. I then mentioned a track record as a way to remind us that we lack the sort of particularly clear evidence that we agree would be persuasive. I didn’t mean that as a criticism that you or others have not worked hard enough to create such a track record. Surely you can understand why outsiders might find your standard suspect: you say you think your community is more accurate because its members more often agree with your beliefs.
You said:
I find this claim unsettling, since the rationalist community aggressively promotes an unusual set of epistemic norms (e.g. lots of reliance on logic and numeracy, on careful scrutiny of sources and claims, a trade in debunking explanations) which appear to me to be unusually good at producing true beliefs. You presumably have experience with these norms (e.g. you read stuff Eliezer writes, you sometimes talk to at least me and presumably other rationalists, you are sometimes at rationalist parties), and seem to be rejecting the claim that these norms are actually truth-promoting.
I certainly agree that we don’t have the kind of evidence that could decisively settle the question for an outsider, and I think skepticism is reasonable. The main way someone would come to be optimistic about the rationalists is by actually looking at and reasoning about rationalist discourse. You seem to have done this, though, so I read your comment as a strong suggestion that this reasoning is not very weighty given the absence of a track record that might provide more decisive evidence.
Even if you use truth-promoting norms, their effect can be weak enough that other factors overwhelm it. The “rationalist community” is different in a great many ways from other communities of thought.
Unusual...? How unusual do you think these epistemic norms would seem to someone from the hard sciences? Or even to, say, a civil engineer?
You keep on setting a low bar. It’s really not that hard to be better than the average.
True beliefs are at best an intermediate, instrumental goal. What you need to do is be good at producing desirable outcomes in reality, not inside your own head.
One problem with threads of this form is that I feel inclined to respond even when I don’t expect it to be useful. It would be nice to cultivate norms that allow us to wind these things down somewhat more quickly and gracefully; I think this would improve my willingness to comment here and on the EA forum.
I would like to make a response like “I have objections to this comment, but I don’t think that continuing this conversation in this medium is likely to be the best use of our time” and for you to have the option of responding “I probably have objections to your objections,” and for us to leave it at that, letting readers infer what they will and continue the discussion if they want to.
I think the problem with saying nothing is that it feels (probably irrationally) like accepting the last word, which is somewhat unpleasant if you have objections you’d like to express.
I think the problem with just making a dismissive comment like this is that it reads more aggressively than I would like it to; it also reads like an implicit claim that I have the social position or credibility to justify such dismissiveness. But it’s just meant to be a judgment about which disagreements are useful.
For now I might try making the somewhat dismissive response with a link to this discussion:
I have objections to this comment, but I don’t think that continuing this conversation in this medium is likely to be the best use of our time
I am interested in whether people think this is a good policy, or something else would work better.
In such situations I usually offer to agree to disagree. That’s not a put-down, but a clear signal that I don’t think the conversation is going anywhere. It also offers the other side an opportunity for parting words.
And if the other party doesn’t take the hint, you can just shrug, tap out, and bail.
That’s a rather long reply to an observation that you don’t have any data to back up your claims.
If you are saying you’re better, you should explain what you mean by “better,” compared to whom, and what data supports this conclusion. If you don’t have data, why should anyone take this claim seriously?
Robin’s comment irked me and I indulged the impulse to write a response, which resulted in rambling and was probably a mistake.
Also, neither Sarah’s comment nor mine was mostly asserting “we are better,” and the interesting content of Robin’s comment was not “we don’t have data to back up that claim.” (See my response to Robin.)
When you appeal to theory that is not conventionally robust, I think the key distinction is between asking intuition and looking for priors. This seems to be the same disagreement as the one about the potential for philosophical progress: intuition may claim something, but does it have the expertise, does it connect with the territory? If in an a priori framing (as in an outside view, or an antiprediction) something seems unlikely, and intuition shouldn’t be expected to know much better, why trust it? Intuition is not the a priori; it should merely coincide with it when the mind has no useful data.
(The question on which intuition is hard to trust concerns real-world rationalists, not ideal rationalists. In principle rationality training is useful, but the difficult question is whether its effect is significant compared to that of selecting people for the same style of thinking.)
I agree. The hypothetical reason-promoting community need not be the one that already exists and is called “the rationalist community.”