This is really interesting. Can you give some concrete examples from different areas, and explain a bit about how your capital has increased?
In a sense, it’s very straightforward. Better predictive models of the world improve instrumental rationality in all domains. The effect has also compounded, because the realization helped me understand how most people think, which further improved my predictive models of the world.
Yes, I agree that if it’s a substantially better predictive model, it would be very useful. Do you have any concrete examples of, say, beliefs you came to using many, many weak arguments that you would not have come to using a single strong argument, and which later turned out to be true? Or ways in which you started modeling people differently, and how specifically those improved models were useful? (My prior for “one thinks one is improving instrumental rationality ⇒ one is actually improving instrumental rationality” is rather low.)
The examples are quite personal in nature.
Retrospectively, I see that many of my past predictive errors came from repeatedly dismissing weak arguments against a claim by comparing each one to a relatively strong argument for it, without realizing that I was doing this repeatedly, and so ignoring an accumulation of evidence against the claim just because no single piece of evidence against it appeared stronger than the strongest piece of evidence for it.
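For concreteness, here is a minimal sketch in Python of the arithmetic behind that failure mode: several weak likelihood ratios, none of which individually beats one strong opposing ratio, can still dominate once they are combined. The specific numbers are purely illustrative assumptions, not drawn from the exchange above.

```python
import math

# Prior odds for the claim (1:1, i.e. 50/50), expressed in log-odds.
log_odds = math.log(1.0)

# One strong argument for the claim: likelihood ratio of 10:1 in its favor.
strong_for = 10.0
log_odds += math.log(strong_for)

# Six independent weak arguments against the claim, each only 2:1 against,
# so each is individually much weaker than the strong argument above.
weak_against = [2.0] * 6
for lr in weak_against:
    log_odds -= math.log(lr)

posterior_odds = math.exp(log_odds)
print(f"Posterior odds for the claim: {posterior_odds:.2f}")  # 10 / 2**6 ≈ 0.16

# Although no single counterargument outweighs the strong argument,
# together they leave the claim at roughly 6:1 odds against it.
```

Dismissing each weak argument one at a time against the strongest pro-argument never lets the counter-evidence add up in this way.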
If you’d like to correspond by email, I’ll say a little more, though not so much as to compromise the identities of the people involved. You can reach me at jsinick@gmail.com