It is certainly true that transhumanism, existential risks, and the singularity lay one open to wishful thinking, because many people involved endorse and focus on certain ideas and worldviews out of a desire to be a certain kind of person (e.g., the kind of person who “saves the world” or “makes a difference”), rather than out of a desire for something more concrete and near-mode, like money or sex, or even a specific near-term outcome.
However, one should note that reasoning as if motivated cognition were infinitely difficult to defeat is not necessarily the way to win. Perhaps we could settle on some middle ground: give up certain lofty goals as hopelessly resistant to debiasing, but not others.
Well, hopefully we do end up at a middle ground, but it doesn’t come for free just by wishing for it. One solution is to tie some incentives to reality, so that you gain when you are right and lose when you are wrong.
For emphasis, again: The Proper Use of Humility.
OK. What incentive mechanisms are likely to work? We want an incentive that’s sufficiently robust to be useful feedback with some actual emotional punch, but lightweight enough to use habitually. If you commit yourself to strong disincentives for being wrong, then I suspect you’ll be less likely to use the mechanism, or you’ll fail to calibrate on uncertainty. If the mechanism involves significant transaction costs, then you’ll never use it but for the most serious concerns, or those concerns in which you benefit by making a visible commitment.
One way is to keep a journal of thoughts, opinions, and current positions, however mundane. This can be a fairly lightweight commitment, no more than a few minutes a day. If you keep it private, you can freely record things that you think might be true, without the overhead of public commitment to a proposition. And if you check back through it months or years later, you can gather feedback about when you thought clearly and when your thinking was motivated. Especially in personal matters, a gap of a few months can throw your old perspective into sharp contrast with your current one.
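As a minimal sketch of how such a journal could be made scoreable later (the file name and entry format here are illustrative assumptions, not a prescribed standard), each opinion can be logged as a dated entry with an explicit probability attached:

```python
# Minimal prediction-journal sketch. The file name and entry format
# are illustrative assumptions, not a prescribed standard.
import datetime
import json

JOURNAL_PATH = "journal.jsonl"  # hypothetical: one JSON object per line

def log_entry(claim: str, probability: float) -> None:
    """Append a dated claim, with your current credence, to the journal."""
    entry = {
        "date": datetime.date.today().isoformat(),
        "claim": claim,
        "probability": probability,  # credence that the claim is true, 0.0-1.0
        "resolved": None,            # set to True/False once reality answers
    }
    with open(JOURNAL_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")

# Example: a mundane, checkable opinion, recorded in seconds.
log_entry("I will finish the draft by the end of the month", 0.7)
```

Attaching a number rather than a bare opinion is the part that keeps the habit lightweight while still leaving something that later reality can grade.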
Of course, another possibility is to have open discussions with other aspiring rationalists. State and defend positions that you hold with higher uncertainty than the positions you would usually commit to publicly, with the understanding that those positions and the ensuing discussion need not leave your circle. Seek out hints of biases in each other’s positions. If your circle values clarity and precision, then clarity and precision will be socially rewarded and imprecision will be socially punished. Admittedly, this idea has been covered before. I think, though, that the “rationality dojo” answers some of these issues.
Having open discussions with colleagues doesn’t directly create incentives tied to reality, unless those folks have such incentives. A journal might create such incentives if one took the trouble to score one’s previous statements against later revealed reality.
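Continuing the illustrative journal format from the sketch above, that scoring step might look like the following: once entries are marked resolved, a Brier score (the mean squared error between stated credence and actual outcome) gives a simple, standard measure of how well one’s past statements tracked reality.

```python
# Sketch of scoring resolved journal entries with a Brier score.
# Assumes the journal.jsonl format from the sketch above.
import json

def brier_score(path: str = "journal.jsonl") -> float:
    """Mean squared error between stated credence and actual outcome."""
    scores = []
    with open(path) as f:
        for line in f:
            entry = json.loads(line)
            if entry["resolved"] is None:
                continue  # skip claims reality has not yet judged
            outcome = 1.0 if entry["resolved"] else 0.0
            scores.append((entry["probability"] - outcome) ** 2)
    if not scores:
        raise ValueError("no resolved entries to score yet")
    return sum(scores) / len(scores)

# Lower is better: 0.0 is perfect, and always answering 50% earns 0.25.
print(f"Brier score: {brier_score():.3f}")
```

Anything fancier, such as weighting by importance or plotting calibration curves, can be layered on, but even this much turns the journal from write-only into feedback.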
Agreed—a write-only journal does not help at all. The payoff comes from the sometimes-startling realization of the difference between one’s current and old views.
And yes, it almost seems easier for the “rationality dojo” to go wrong than right.
I’m new at this; let me expand what I think your point is. It’s easier to argue without referring to ground truth than it is to stay focused on reality. (See: almost any online argument.) The default incentive structure of an argument rewards winning the argument, not finding the truth. Thus, practicing argument will make the practitioner good at arguing, rather than finding truth. Since an incentive structure strongly shapes behavior even when we recognize the structure, the group will not learn rationality unless the group can make incentives to do so. So, the rationality dojo doesn’t solve the problem, it merely defers it to another level of organization.
On the other hand, the rationality dojo affords a wider set of incentives—we can, if careful, use group dynamics to shape the incentive structure. If successful there, we can use the desire to belong to a group to enhance those incentives. The dojo is a possibly useful ingredient in this incentive structure.
I read an account of someone reading their old journals, and the educational surprise was how little their views had changed. They were having what seemed like new revelations again and again, but they were the same revelations. I can’t remember what use the writer made of the overview.
If you’re going to keep harping on prediction markets, at least harp on them openly instead of insinuating.
One way to tie incentives to reality is simply to expect to survive into the relatively distant future, and therefore to have one’s plans pertaining to the future yield results, good or bad. Being young and healthy helps with this.
That works for topics on which longer lives create stronger incentives. Not clear how many topics that is.
Well, futurism and the singularity are clearly among these topics, as we are talking about events that will happen this century.