If my beliefs, by so existing, change the outcome of reality
So, if your writing “here be dragons” on a map results in someone encountering a dragon when traveling to the mapped area (a very popular theme in SF/F), how useful is the concept of reality?
You don’t need to reject realism to reject the idea that beliefs can only be reflections of reality, rather than a causal part of it.
You don’t need to, no. But the concept of reality is less useful if all you mean by it is future inputs to test the accuracy of your beliefs, as opposed to a largely unchanged territory that you map. If you have to build the proverbial epicycles to keep your belief alive, you might want to consider a simpler model.
The map is part of the territory.
Is this a useful assertion? If so, how?
How accurately does your map represent your map?
Not sure what you mean by this. That beliefs can be nested? Sure. That the term “map” presumes some territory it maps? It sure does, in the realist picture. Hence my preference for the term “model” or even “belief”. Of course, a realist can ask something like “but what is your model a model of [in reality]?”, which to me is not a useful question if your reality is changing depending on the models.
So, if it is true in reality that your writing “here be dragons” on a map results in someone encountering a dragon when traveling to the mapped area...
It happens. Probably not with dragons, but with placebo and with many other things where the nonlinear second-order effect of map on reality is ignored in this simplified map/territory distinction. So why make the distinction?
(Sorry.) To answer your question: for the times when it doesn’t happen? I wasn’t actually planning to join a debate; you might find it more productive to ask one of the people who gave more in-depth replies.
The placebo effect came up elsewhere as an example where beliefs alter reality. Similarly, self-fulfilling prophecies need not rely on magic: if I believe I’ll fail at a task, then just by holding this belief I very probably alter my odds of completing that task. The modified litany isn’t “All beliefs modify reality,” but “I should have accurate beliefs about which beliefs have repercussions in reality.” Your dragon example is merely a demonstration of a belief that is immaterial to reality, at least for the purposes of the subject of the belief.
I believe this response suffices in answering the rest of your objections, as well.
See http://lesswrong.com/lw/h69/litany_of_instrumentarski/8qht for an example of a pretty common theme in this post. Contrary to the argument presented in the comment [ETA: I misread the comment; this argument isn’t actually present. My apologies!], rationality doesn’t break down, a specific and faulty idea held by some rationalists breaks down.
At the risk of dogpiling:
Reality is a useful concept in all possible universes you might find yourself in!
Real things cause qualia, unreal things do not. No matter what you care about, this distinction will impact it.
May I ask what definition of reality you are currently using?
He’s talking about brains.