When I was trying to solve the koan I focused on a few interrelated subproblems of skill one. It seems like this sort of thinking is particularly useful for reminding yourself to consider the outside view and/or the difference between confidence levels inside and outside an argument. Also, I think the koan left out something pretty important. Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory—to visualize your thought bubble containing a belief, and a reality outside it, rather than just using your map to think about reality directly? How exactly does it hurt, on what sort of problem?
.
.
.
.
.
It looks pretty solid for describing unbounded epistemic rationality. It’s slightly iffier from a bounded instrumental perspective, in that it probably imposes some mental cost to apply and there are many circumstances where it’s not noticeably helpful. There’s also the matter of political situations and the like, where it’s arguably good to be generally overconfident.
Under what circumstances, if any, is it harmful to consciously think of the distinction between the map and the territory
If you can ever gain by being ignorant, you can gain more by better knowledge still.
Cf. E.T. Jaynes: “It appears to be a quite general principle that, whenever there is a randomized way of doing something, then there is a nonrandomized way that delivers better performance but requires more thought”, quoted here.
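To make Jaynes’s point concrete, here’s a toy Python sketch (the task and the numbers are my own illustration, not anything from the quote): when predicting flips of a biased coin, the randomized rule of probability matching is reliably beaten by the nonrandomized rule of always guessing the more likely face.

```python
import random

# Toy illustration of Jaynes's principle (setup and numbers are my own
# assumption, not from the quoted source): predicting flips of a biased
# coin.  "Probability matching" is a randomized rule; always guessing
# the more likely face is the nonrandomized rule that beats it.

P_HEADS = 0.7      # assumed bias of the coin
TRIALS = 100_000

random.seed(0)

def flip() -> str:
    """One flip of the biased coin."""
    return "H" if random.random() < P_HEADS else "T"

def match_guess() -> str:
    """Randomized guess that matches the coin's own probabilities."""
    return "H" if random.random() < P_HEADS else "T"

# Randomized rule: guess "H" with probability 0.7, "T" with probability 0.3.
matching_hits = sum(flip() == match_guess() for _ in range(TRIALS))

# Nonrandomized rule: always guess the more likely face.
maximizing_hits = sum(flip() == "H" for _ in range(TRIALS))

# Expected accuracies: 0.7*0.7 + 0.3*0.3 = 0.58 versus 0.70.
print("probability matching:", matching_hits / TRIALS)
print("always guess heads:  ", maximizing_hits / TRIALS)
```

With a 70/30 coin the matching rule is right about 58% of the time (0.7² + 0.3²), versus 70% for the deterministic rule.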
How exactly does it hurt, on what sort of problem?
Beliefs are part of reality too. The image “thought bubble containing a belief, and a reality outside it” is a good map, but it’s not itself the territory.
In particular, the mantra “Reality is that which, when we stop believing in it, doesn’t go away” can be harmful in areas such as psychology and sociology, and in domains which have a large component of these, such as finance, politics or software engineering. In these domains you must account for phenomena such as self-fulfilling or self-cancelling prophecies. Concrete example: stock market crashes.
So you’re saying that if we stop believing in stock market crashes, they go away?
I think what you mean is that if you intervened to change everyone’s beliefs away from “oh shit, sell!”, then stock market crashes would not happen. That is a different matter from just my belief or yours.
So you’re saying that if we stop believing in stock market crashes, they go away?
More often it works the other way around: the fact that someone stops believing in an overinflated stock market (i.e. claims a “bubble” is about to burst) acts as a self-fulfilling prophecy, causing others to also stop believing, which, if the information cascade propagates far enough, will cause a crash, thereby bringing reality in line with the original belief.
But information cascades can also cause booms, which, as I understand it, is more likely with individual stocks.
The “someone” above is underspecified: it can be one particularly influential person—Nate Silver recounts how Amazon stock surged 25% after Henry Blodget hyped it up in 1998. But it can also be a larger group, who, looking at small fluctuations in the market, panic and start a stampede.
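Here’s a rough sketch of how such a stampede can feed on itself, using a simple threshold model that I’m assuming purely for illustration (nobody in this thread proposed it): each trader sells once the fraction of the market already selling exceeds their personal panic threshold, so whether an initial panic fizzles or tips the whole market depends on how those thresholds are distributed.

```python
import random

# Toy Granovetter-style threshold model of a selling stampede.  The model
# and every number in it are assumptions made purely for illustration,
# not something proposed in this thread: each trader sells once the
# fraction of the market already selling exceeds their personal panic
# threshold.

random.seed(1)

N = 1000
# Panic thresholds loosely clustered around 0.3 (an assumed distribution).
thresholds = [min(1.0, max(0.0, random.gauss(0.3, 0.1))) for _ in range(N)]

def run_cascade(initial_fraction: float) -> float:
    """Fraction of traders selling once the cascade settles down."""
    forced = int(initial_fraction * N)            # the initial panic sellers
    selling = [i < forced for i in range(N)]
    while True:
        fraction = sum(selling) / N
        newly = [i for i in range(N)
                 if not selling[i] and thresholds[i] < fraction]
        if not newly:                             # nobody else tips over
            return fraction
        for i in newly:
            selling[i] = True

for start in (0.02, 0.05, 0.10, 0.20):
    print(f"initial panic {start:.0%} -> final fraction selling {run_cascade(start):.0%}")
```

With these made-up parameters, a 2% or 5% initial panic tends to stall, while 10% or more typically cascades to nearly the whole market.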
My point is that “thought bubbles” in general are part of reality. Your believing in things has causal influence on reality (another concrete example: romantic relationships—the concept “love”, which can be cashed out in terms of blood levels of various hormones, is one of those things that go away because people stop believing in them). It is generally bad epistemic practice to overstate this influence, but it can also be bad to understate it.
Agreed.
My point was that your examples are part of reality in a way that the idealized observer’s belief assumed in the “reality is that which...” mantra isn’t.
No. It may be good to talk as though you’re overconfident. Actually being overconfident is just unnecessarily shooting yourself in the foot.