One heuristic I learned is not to adopt the opinion of a single expert in the field, but to find out the consensus position of experts on that topic. Another is to take meta-analyses more seriously than individual scientific publications. These are both good heuristics, but the heuristic I learned in order to learn them was just to follow around people who collected good heuristics for matching the map to the territory. This is the rationality community. There are pieces of advice for scientific literacy which fall out of common sense, and which skeptics and science communicators tell the public, like not taking news reports of a scientific study’s results at face value. But I haven’t completed a university degree. If I hadn’t found the rationality community, I’d never have known “initially anchor on expert consensus” or “look up meta-analyses over individual studies” were good heuristics for matching the map to the territory.
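To make the meta-analysis heuristic a bit more concrete, here is a minimal sketch of why pooling studies beats leaning on any single publication. This is my own toy illustration with made-up numbers, not something from the original exchange; it uses standard inverse-variance weighting, under which the pooled standard error can never exceed the smallest single-study standard error.

```python
import math

# (effect_estimate, standard_error) for three hypothetical studies
studies = [(0.30, 0.15), (0.10, 0.20), (0.25, 0.10)]

weights = [1 / se ** 2 for _, se in studies]  # inverse-variance weights
pooled_effect = sum(w * est for (est, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))  # never larger than the smallest single-study SE

print(f"pooled effect = {pooled_effect:.3f}, pooled SE = {pooled_se:.3f}")
```

The same basic logic is part of why “anchor on expert consensus” tends to work: aggregating many noisy estimates tends to wash out individual error.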
And this gets me thinking it’s possible much of the rationality community is just people picking up heuristics for matching the map to the territory in an endless game of follow-the-follower. Our approach to improving our own epistemologies is to act like a school of fish. For all we know, we could be a bunch of sophists who could never expect to independently recreate the reasoning which develops good heuristics for matching the map to the territory. It’s certainly not *all*, or maybe even most, community members, but it could well be a lot of us. I fear I’d be in that group.
Of course I wouldn’t discourage people from using the heuristics even if they don’t fully understand them. If rationality is systematized winning, and we’re a community, we’re going to share systems for winning with each other. Some of us will be able to design systems from scratch, or will have mastered the use of existing ones. This is instrumentally rational. But for those of us who feel like we’re constantly borrowing others’ systems without understanding them, individually developing our own epistemic rationality might be necessary for instrumental rationality and goal achievement later. If we won’t always have a community of masters and designers willing to share their systems for winning, eventually we’ll need to figure out how to systematically win from scratch. How do we do that?
I’ve been struggling a bit with a reply. I have a suspicion we’re not quite talking about the same thing, have different underlying assumptions, or maybe you’re just generalizing.
I consider what I describe here to be of pretty limited ‘practical’ value (where by ‘practical’ I mean having a benefit not directly based on feelings). I care about knowing whether the minimum wage is a good policy, to pick one example of the kind of question I had in mind with this post, but pretty much only for intellectual curiosity, and the same is true for most similar questions. For me, there’s only really one entry in this category where having more accurate views has significant practical implications, and that’s the question about AI risk. Here the implications are massive, but I don’t think that one’s difficult to get right.
If we go one level higher, to what I’ll call abstract epistemology for the sake of this post – general mental skills for being more rational – I’d agree that those have much more practical implications, because you can probably optimize wasteful behaviors in your own life. That’s where I’m on board with talking about systematic winning; but this post really isn’t about that. It’s specifically about whether, if public figure X says something about the minimum wage, that should change your view on the topic or not.
So you’re raising an interesting point but it seems like it goes beyond what I’ve been talking about, right? Which isn’t bad, I’m just trying to get clarity.
Similarly, it seems odd to me to describe LW as primarily, or to any significant degree, being about looking for heuristics to match m&t. Like, as I was saying in the post, it seems to me that there is barely any talk about that; people are either more abstract (-> the sequences) or less abstract (-> particular theoretical arguments, mostly about AI or about charity; or practical advice about instrumental rationality).
Going by that, one obvious explanation for why there isn’t talk about matching m&t in this way is the lack of practical value, but, given how much people seem to care, I don’t buy that.
I’m not sure if the question you asked at the end was how to come up with the kind of cues on my list, but I’ll describe how I arrived at #8, which should be a fine example. It seems pretty clear to me that the institution of academia is highly flawed, to the point that people can have successful careers while mostly saying things which, to a rational person, are obviously false. Experts disagree on basic questions, the process is inefficient, there is a lot of wasteful signaling, and the world would look different if academia were working really well. Failure to recognize that seems to be a fairly reliable signal of incompetence: it’s something that’s not often talked about and not mainstream, so you have to realize it yourself, and the most likely explanation for not doing so is that you’re not significantly more competent than most of the people who constitute academia. My own very limited experience of working for people doing their PhDs confirms this. So I don’t have a better reply than essentially picking up random valuable observations such as the above, which is what this list consists of.
If the question was how to have accurate views without such a system, I think the two heuristics you mentioned are solid (though, how do you go about figuring out what the scientific consensus on something is?). I’d also say, look at polls among really smart people. Like this and the SSC surveys. And insofar as they are applicable, prediction markets. But I wouldn’t label either as systematic winning. On that front, I think the sequences are the most powerful tool, along with the books Inadequate Equilibria and The Elephant in the Brain. Those don’t need to be re-invented.
What I was getting at was that coming up with systems for deciding whom to take seriously, in order to match the map to the territory, isn’t a replacement for knowing or learning how to do so independently. It was a tangent; I should’ve pointed that out. As you pointed out, it doesn’t cause serious problems except in uncommon cases like working on AI alignment.
> I consider what I describe here to be of pretty limited ‘practical’ value (where by ‘practical’ I mean having a benefit not directly based on feelings). I care about knowing whether the minimum wage is a good policy, to pick one example of the kind of question I had in mind with this post, but pretty much only for intellectual curiosity, and the same is true for most similar questions.
This feels like an indicator that we need to get more specific.
There are a few distinct things I can match to “picking an opinion on a topic”.
There’s the Social Instruction Manual Version. You want to be able to do the symbolic thing of Having a Conversation About Minimum Wage. Which side do you root for, what do you boo and yay, what sorts of words should you say to which people, etc.
There’s adopting specific predictions with X amount of certainty. This would be listening to someone talk about minimum wage and going, “Okay, I now expect that if a minimum wage was put in place in Examplestan, XYZ consequences would happen with probability P.” This could be “practical” (maybe you need to make decisions based on this prediction), but it doesn’t have to be. (There’s a toy sketch of this option after the third one below.)
Then there’s adopting an attitude. I see an attitude as a large set of under-specified rules for making predictions about the subject of the attitude. (The quote below is taken from the first page of googling “why should we raise the minimum wage?”)
> To be sure, increasing the minimum wage alone won’t solve the broader problems of wage stagnation and income inequality. We need to make greater investments in job training and strengthen labor protections, among other policies. But a higher minimum wage can provide an important lift to the 2.2 million Americans currently earning minimum wage and help tens of millions of other workers who earn a few dollars more than $7.25.
This doesn’t really tell me how to make predictions about minimum wage related issues, but it does point me in a direction.
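As a toy illustration of the second option – adopting a specific prediction with some amount of certainty – here’s a minimal sketch, entirely my own and with hypothetical numbers, of pooling your prior with a speaker’s stated probability in log-odds space, weighted by how much you trust them:

```python
import math

def logit(p):
    return math.log(p / (1 - p))

def inv_logit(x):
    return 1 / (1 + math.exp(-x))

my_prior = 0.50          # my credence that the predicted consequence would occur
expert_estimate = 0.80   # the speaker's stated probability
trust = 0.7              # 0 = ignore the speaker entirely, 1 = adopt their number outright

# pool the two credences in log-odds space, weighted by trust
updated = inv_logit((1 - trust) * logit(my_prior) + trust * logit(expert_estimate))
print(f"updated credence: {updated:.2f}")  # lands between 0.50 and 0.80
```

The trust weight is doing all the interesting work here; the sketch is only meant to show that “adopting” a prediction doesn’t have to mean copying the speaker’s number wholesale.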
Roughly, I see Social Instruction Manuals as not super important, specific predictions as useful in proportion to how interested I am in the topic, and attitudes as often too risky to adopt without a lot more thought. It seems important to make clear which one I’m after, because I’d use different rules for picking people to adopt different kinds of “opinions” from.
Which of these did you have in mind when writing this post? Or were you thinking of something different from my three options?