I’m puzzled, as usual, but perhaps more so: this post has helped clarify the lack of clarity in my understanding of “bias” as the word is used here. You see, I don’t in general see an a priori distinction between “bias” and other kinds of heuristics; we are talking about computational shortcuts, ways to save on reasoning, and all of them go wrong sometimes. I’m glad to have scope insensitivity pointed out to me, and having seen the discussion on this blog may even save me from some error at some point, but my reasoning will always be incomplete and my models will always fall short of being “isomorphic with reality.” (A modeler’s joke, along the lines of using a territory as its own map.) I am tempted to think of “bias” as a label for the systematic errors introduced by any given heuristic, such as valuation by prototype; is this fair?
If so, “overcoming bias” is one of those journey-rather-than-destination unattainables that at any given moment faces me with an economic sort of choice: I can spend more resources trying to overcome the bias introduced by each heuristic I use, i.e., on meta-inference, or I can spend more resources actually carrying out inferences with those heuristics (expanding and pruning nodes in my actual current search-graph, so to speak) and coming to conclusions even though I know that some of them will turn out to have been wrong.
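To make the tradeoff concrete, here is a toy sketch in Python. None of it comes from the post; TRUE_VALUE, BIAS, NOISE, and the rule that each unit of meta-inference halves the remaining bias are all invented for illustration. A fixed budget gets split between “meta” steps that shrink a heuristic’s systematic bias and object-level samples drawn with the still-biased heuristic:

```python
import random

random.seed(0)

TRUE_VALUE = 100.0   # hypothetical quantity the heuristic estimates (invented)
BIAS = 20.0          # systematic error of the raw heuristic (invented)
NOISE = 30.0         # per-sample noise of the heuristic (invented)

def heuristic_sample(bias):
    """One object-level inference: a noisy, biased estimate of TRUE_VALUE."""
    return TRUE_VALUE + bias + random.gauss(0.0, NOISE)

def squared_error(budget, meta_steps):
    """Spend meta_steps units on debiasing (each assumed to halve the
    remaining bias) and the rest of the budget on drawing samples;
    return the squared error of the resulting average."""
    bias = BIAS * (0.5 ** meta_steps)   # meta-inference shrinks systematic bias
    n_samples = budget - meta_steps     # whatever is left goes to inference
    samples = [heuristic_sample(bias) for _ in range(n_samples)]
    estimate = sum(samples) / n_samples
    return (estimate - TRUE_VALUE) ** 2

budget = 50
for meta_steps in (0, 5, 15, 30, 45):
    trials = [squared_error(budget, meta_steps) for _ in range(2000)]
    print(f"meta={meta_steps:2d}  mean squared error = {sum(trials)/len(trials):7.1f}")
```

With these made-up numbers the error bottoms out somewhere in the middle: a little debiasing pays for itself quickly, but past that point the marginal unit of budget buys more as an extra sample than as further meta-inference.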
Is this an obviously bad way to think? Probably so, because it leans me in Tyler’s direction. Well, I dunno. This comment is an extremely imperfect representation of what I want to say... but I post it, on the grounds that I have to go listen to Dougie MacLean, a Scottish singer and songwriter, and achieving a more perfect (but never actually perfect) representation of what I want to say has a finite value, which at the margin somewhere loses out to getting on with the next thing. I thought that was true for everybody.