Hi,
Somewhat unrelated, but my question is about dissolution. What is the empirical evidence behind it? Could someone point me to it, preferably something short about brain structures?
Otherwise, it would seem overly subject to hindsight bias: you've seen people make a mistake, and you build a brain model that makes that mistake. But some other brain model could produce the same mistake; you just don't know, because your dissolution is unfalsifiable.
Thank you!
Thanks for the question!
I'm not making a global claim that people who do this always, or even usually, do better than people who reason forwards (although I don't see why the approaches need be mutually exclusive). I do suspect this is the case for many problems. As for the meat of the approach (fixing a success condition clearly in your mind), it seems reasonable to do that whenever you solve a problem. In fact, not doing so would be an obvious failure mode for all but the most trivial problems, unless you want to solve problems by a somewhat random walk through solution-space.
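To make that contrast concrete, here is a minimal sketch in Python, using an invented toy problem (the names and numbers are mine, purely illustrative). With the success condition written down up front, the search can be systematic and can even confirm that no solution exists; a random walk can only stumble on answers and can't distinguish "no solution" from "unlucky":

```python
import random

# Toy problem (invented for illustration): find a number in 1..1000
# whose square is 144. "Fixing the success condition" means writing
# is_success down explicitly before you start searching.
CANDIDATES = range(1, 1001)

def is_success(x: int) -> bool:
    """The success condition, fixed in advance of the search."""
    return x * x == 144

def goal_directed_search():
    """Systematic search: check every candidate against the explicit goal."""
    for x in CANDIDATES:
        if is_success(x):
            return x
    return None  # exhausted the space, so we know no solution exists

def random_walk_search(max_steps: int = 50):
    """Random walk: sample candidates and hope; stopping is a matter of luck."""
    for _ in range(max_steps):
        x = random.choice(CANDIDATES)
        if is_success(x):
            return x
    return None  # gave up; can't tell "no solution" apart from "unlucky"

print(goal_directed_search())  # 12, found deterministically
print(random_walk_search())    # 12 if lucky within 50 samples, else None
```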
To be clear, my evidence is anecdotal, as are my claims. For me personally, that anecdotal evidence is rather strong: I have indeed been more insightful over the last few months. In that time, my IQ, mental health, and other obvious confounds have not changed; I'm doing my best to isolate what has changed so it can be replicated and reused for the benefit of the community at large.
So if I can't be sure, what's the point? Why share this? As I understand it, one of the driving forces behind the instrumental rationality project is that the scientific study of achievement-maximization, and of thinking clearly about really hard problems, has been woefully under-prioritized. So I'm doing my part by sharing the things I'm fairly sure explain the changes for me and seeing whether they generalize. I'd love for others to try this approach and report their results; both affirmative and negative results would bear on the question of whether this is just an incorrect post hoc explanation.