Just a notepad/stub as I review writings on filtered evidence:
One possible solution to the problem of the motivated arguer is to set up incentives so that arguments on all sides are motivated. Eliezer covered this in “What Evidence Filtered Evidence?” So a rationalist response to the problem of filtered evidence might be to set up a similar structure and protect it against tampering.
What would a rationalist do if they suspected a motivated arguer was calling a decision to their attention and trying to persuade them of option A? It might be to become a motivated arguer in the opposite direction, for option B. This matches what we see in psych studies. And this might be not a partisan reaction in favor of option B, but rather a rejection of a flawed decision-making process in which the motivated arguers act as lawyer, jury, and judge all at once. Understood in this framework, doubling-down when confronted with evidence to the contrary is a delaying tactic, a demand for fairness, not a cognitive bias.
Eliezer suggests that it’s only valid to believe in evolution if you’ve spent 5 minutes listening to creationists as well. But that only works if you’re trying to play the role of dispassionate judge. If instead you’re playing the role of motivated arguer, the question of forming true beliefs about the issue at hand is beside the point.
Setting up a fair trial, with a judge and jury whose authority and disinterest is acknowledged, is an incredibly fraught issue even in actual criminal trials, where we’ve had the most experience at it.
But if the whole world is going to get roped into being a motivated arguer for one side or the other, because everybody believes that their pet issue isn’t getting a fair trial, then there’s nobody left to be the judge or the jury.
What makes a judge a judge? What makes a jury a jury? In fact, these decompose into a set of several roles:
Trier of law
Trier of fact
Passer of sentence
Keeper of order
In interpreting arguments, to be a rationalist perhaps means to choose the role of judge or jury, rather than the role of lawyer.
In the “trier of law” role, the rationalist would ask whether the procedures for a fair trial are being followed. As “trier of fact,” the rationalist would be determining whether the evidence is valid and what it means. As “passer of sentence,” the rationalist would be making decisions based on it. As “keeper of order,” they are ensuring this process runs smoothly.
I think the piece we’re often missing is “passer of sentence.” It doesn’t feel like it means much if the only decision that will be influenced by your rationalism is your psychological state. Betting, or at least pre-registration with your reputation potentially at stake, seems to serve this role in the absence of any other consequential decision. Some people like to think, to write, but not to do very much with their thoughts or writings. I think a rationalist needs to strive to do something with their rationalism, or at least have somebody else do something, to be playing their part correctly.
Actually, there’s more to a trial than that:
What constitutes a crime?
What triggers an investigation?
What triggers an arrest and trial?
What sorts of policing should society enact? How should we allocate resources? How should we train? What should policy be for responding to problems?
As parallels for a rationalist, we’d have:
“How, in the abstract, do we define evidence and connect it with a hypothesis?”
“When do we start considering whether to make a formal decision about something, i.e., to place a bet on our beliefs?”
“What should lead us to move from considering a bet, to committing to one?”
“What practical strategies should we have for what sorts of data streams to take in, how to coordinate around processing them, how to signal our roles to others, build trust with the community, and so on? How do we coordinate this whole ‘rationality’ institution?”
And underneath it all, a sense for what “normalcy” looks like, the absence of crime.
I kind of like this mapping → normalcy → patrol → report → investigation → trial → verdict → carrying out sentence analogy for rationalism. Spelled out, it would look more like:
Mapping:
getting a sense of the things that matter in the world. for the police, it’s “where’s the towns? the people? where are crime rates high and low? what do we especially need to protect?”
this seems to be the stage where an idealization matters most. for example, if you decide that “future lives matter” then you base your mapping off the assumption that the institutions of society ought to protect future lives, even if it turns out to be that they normally don’t.
the police aren’t supposed to be activists or politicians. in the same way, i think it makes sense for rationalists to split their function between trying to bring normalcy in line with their idealized mapping, and improving their sense of what is normal. here we have the epistemics/instrumentality divide again. the police/politics analogy doesn’t make perfect sense here, except that the police are trying to bring actual observed behavior in line with theoretically optimal epistemic practices.
Normalcy might look like:
the efficient market hypothesis
the world being full of noise, false and superficial chatter, deviations from the ideal, cynicism and stupidity
understanding how major institutions are supposed to function, and how they actually function
base rates for everything
parasitism, predation
Patrol:
once a sense of normalcy on X is established, you look for deviations from it—not just the difference between how they’re supposed to function vs. actually function, but how they normally actually function vs. deviations from that
perhaps “expected” is better than “normal” in many situations? “normal” assumes a static situation, while “expected” can fit both static and predictably changing situations.
Report:
conveying your observations of a deviation from normalcy to someone who cares (maybe yourself)
Investigation:
gathering evidence for whether your previous model of normalcy can encompass the deviation, or whether it needs to be updated, altered, or discarded
Trial:
creating some system for getting strong arguments for and against the previous model
a dispassionate judge/jury
Verdict:
Some way of making a decision on which side wins, or declaring a mistrial
Carrying out the sentence:
Placing a bet or taking some other costly action, determined at least to some extent in advance, the way that sentencing guidelines do.
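The patrol and report steps above can be sketched in code. This is a minimal, hypothetical illustration in Python, not anything from the original analogy: “normalcy” is a baseline of past observations, and “patrol” flags anything that deviates from it by more than a few standard deviations, queuing it for investigation. The function and variable names are my own.

```python
from statistics import mean, stdev

def patrol(observations, baseline, threshold=3.0):
    """Flag observations that deviate from the expected baseline.

    `baseline` is a list of past values defining "normalcy";
    anything more than `threshold` standard deviations from the
    baseline mean gets reported for investigation.
    """
    mu = mean(baseline)
    sigma = stdev(baseline)
    reports = []
    for x in observations:
        if abs(x - mu) > threshold * sigma:
            reports.append(x)  # deviation from normalcy: report it
    return reports

# A baseline of "normal" values, plus a stream containing one anomaly.
baseline = [10, 11, 9, 10, 12, 10, 11, 9, 10, 11]
stream = [10, 11, 47, 9]
print(patrol(stream, baseline))  # only the 47 is reported
```

Note that this only covers the patrol/report stages; the investigation and trial stages are exactly the part a simple threshold cannot do, since they decide whether the model of normalcy itself needs revision.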
It’s tricky though.
If you want a big useful picture of the world, you can’t afford to investigate every institution from the ground up. If you want to be an effective operator, you need to join in a paradigm and help advance it, not try to build a whole entire worldview from scratch with no help. If you want to invent a better battery, you don’t personally re-invent physics first.
So maybe the police metaphor doesn’t work so well. In fact, we need to start with a goal. Then we work backwards to decide what kinds of models we need to understand in order to determine what actions to take in order to achieve that goal.
So we have a split.
Goal setting = ought
Epistemics = is
The way we “narrow down” epistemics is by limiting our research to fit our goals. We shouldn’t just jump straight to epistemics. We need a very clear sense of what our goals are, why, a strong reason. Then the epistemics follow.
I’ve had some marvelously clarifying experiences with deliberately setting goals. What makes a good goal?
Any goal has a state (success, working, failure), a justification (consequences, costs, why you?), and strategy/tactics for achieving it. Goals sometimes interlink, or can have sharp arbitrary constraints (e.g. given that I want to work in biomedical research, what’s the best way I can work on existential risk?).
You gather evidence that the state, justification, and strategy/tactics are reasonable: the state is clear, the justification is sound, and the strategy/tactics are in fact leading toward the success state. Try to do this with good epistemic hygiene.
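The goal structure above (state, justification, strategy/tactics, constraints) can be written down as a data type. This is a hypothetical sketch in Python; all the names and the example contents are mine, chosen to echo the biomedical-research example, not anything the notepad specifies.

```python
from dataclasses import dataclass, field
from enum import Enum

class State(Enum):
    WORKING = "working"
    SUCCESS = "success"
    FAILURE = "failure"

@dataclass
class Goal:
    name: str
    state: State
    justification: str              # consequences, costs, why you?
    tactics: list = field(default_factory=list)
    constraints: list = field(default_factory=list)  # sharp, arbitrary constraints

# Example instance, echoing the constraint from the text.
goal = Goal(
    name="work on existential risk",
    state=State.WORKING,
    justification="large consequences; few people positioned to act",
    tactics=["biosecurity research"],
    constraints=["must fit within a biomedical research career"],
)
```

Making the structure explicit like this is itself a form of the epistemic hygiene the text asks for: each field is a separate claim you can gather evidence about.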
Doing things with no fundamental goal in mind I think leads to, well, never having had any purpose at all. What if my goal were to live in such a way that all my behaviors were goal-oriented?