Because Type 2 processing is expensive and can only work on one or at most a couple of things at a time, humans have evolved to default to Type 1 processing whenever possible.
We also got Type 1 from our animal heritage, and evolution basically hacked Type 2 on for a few animals, but mostly for us. We haven’t evolved a tendency to use Type 2 because we mostly suck at it. It also relies on reasoning from consciously known premises. Those who are most inclined to override Type 1 with Type 2 often get bitten in the arse by it, because much of what most people believe is crap that just sounds good when signalling.
You write “We haven’t evolved a tendency to use Type 2 because we mostly suck at it.”
Maybe “type 2” is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it. It seems pretty common in successful search-based solutions to AI problems (like planning, pathing, or adversarial games) to use something analogous to the “type 1” vs. “type 2” split, moving a considerable amount of logic into “type 1”-like endpoint evaluation and/or heuristic hints for the search, then layering “type 2”-like search over that and treating the search as expensive. Even in problems that have been analyzed to death by hundreds or thousands of independent programmers (playing chess, e.g.), that design theme persists and enjoys competitive success. The competitive success of this theme across hundreds of independent designs doesn’t eliminate the possibility that this is just a reflection of a blind spot in humans’ design ability, but it seems to me that the success at least casts considerable doubt on the blind spot explanation. Perhaps we should take seriously the idea that the two-layer theme and/or the relative expense of the two layers are properties of good solutions to this kind of problem, not merely idiosyncrasies of the human brain.
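A minimal sketch of that two-layer theme, for concreteness (the toy game, the function names, and the numbers are all illustrative choices of mine, not details of any system mentioned above): a cheap, hand-coded “type 1”-like evaluation, with a “type 2”-like depth-limited search layered over it.

```python
# "Type 1"-like: a cheap hand-coded heuristic. "Type 2"-like: explicit
# look-ahead (negamax with alpha-beta), treated as the expensive layer and
# therefore cut off at a fixed depth. Tic-tac-toe is just a toy stand-in.
WIN_LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in WIN_LINES:
        if board[a] != '.' and board[a] == board[b] == board[c]:
            return board[a]
    return None

def heuristic(board, player):
    # "Type 1"-like layer: lines still open to `player` minus lines still
    # open to the opponent -- crude, but essentially free to compute.
    opponent = 'O' if player == 'X' else 'X'
    def open_lines(p):
        return sum(1 for line in WIN_LINES
                   if all(board[i] in (p, '.') for i in line))
    return open_lines(player) - open_lines(opponent)

def search(board, player, depth, alpha=-999, beta=999):
    # "Type 2"-like layer: explicit search, cut off at `depth`, falling back
    # to the cheap heuristic for everything beyond the horizon.
    opponent = 'O' if player == 'X' else 'X'
    w = winner(board)
    if w:
        return 100 if w == player else -100
    moves = [i for i, cell in enumerate(board) if cell == '.']
    if not moves:
        return 0  # draw
    if depth == 0:
        return heuristic(board, player)
    best = -999
    for m in moves:
        child = board[:m] + player + board[m + 1:]
        best = max(best, -search(child, opponent, depth - 1, -beta, -alpha))
        alpha = max(alpha, best)
        if alpha >= beta:  # prune: the search layer saves work where it can
            break
    return best

print(search('X.O.X....', 'O', depth=2))  # shallow look-ahead over the cheap eval
```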
Maybe “type 2” is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it.
I suggest, rather, that type 2 is universally cheaper if done efficiently, and that we use type 1 preferentially because our “type 2” thought is essentially emulated in “type 1” architecture.
Most of what I did when creating AI was finding more intelligent ways to program the type 2 system such that I could reduce the calls to the expensive type 1 evaluation system.
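A minimal sketch of the kind of control-layer trick that spares an expensive evaluator (the caching and the symmetry canonicalisation below are one illustrative possibility of mine, not a description of the poster’s actual system):

```python
# Make the control ("type 2") layer smarter so the expensive evaluation
# ("type 1") layer is called as rarely as possible: collapse symmetric
# variants to a canonical form and cache every evaluation already paid for.
from functools import lru_cache

eval_calls = 0

@lru_cache(maxsize=None)
def expensive_evaluation(position):
    # Stand-in for a costly evaluator (pattern matching, a learned model, ...).
    global eval_calls
    eval_calls += 1
    return sum(ord(c) for c in position) % 7

def canonical(position):
    # Collapse a position and its mirror image to one representative,
    # so symmetric duplicates share a single expensive call.
    return min(position, position[::-1])

def choose(positions):
    # The control layer decides *which* evaluations to pay for.
    return max(positions, key=lambda p: expensive_evaluation(canonical(p)))

print(choose(["abca", "acba", "abca", "bbbb"]), eval_calls)  # only 2 eval calls
```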
I would be interested in more details on that, because my experience very much matches what wnewman said: it’s more efficient to do as many things as reasonably possible in blind hardwired code than with search or explicit logical inference.
What sort of problems were you trying to solve, and what sort of architecture were you using that you found type 2 more efficient than type 1?
Perhaps I am merely extracting a different message from the “Type 1” and “Type 2” distinction. When I look at the function of the human brain, “Type 1” is orders of magnitude more powerful when implemented in mammal architecture than the pathetic excuse for “Type 2” that we have. 7 bits and 200 Hz? Seriously? How is Type 2 supposed to work with that? But it occurs to me that there are all sorts of arbitrary distinctions that can be drawn between ‘1’ and ‘2’ when generalized away from humans. My own interpretation is obviously going to be biased by different prior models.
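Taken at face value, those figures imply an almost comically narrow serial channel. A back-of-envelope sketch (reading “7 bits at 200 Hz” as channel width times update rate, and the CPU comparison, are my rough illustration, not claims from the comment):

```python
# Back-of-envelope only: treat "7 bits" and "200 Hz" as the width and update
# rate of the serial conscious channel, and compare to a cheap 64-bit core.
serial_throughput = 7 * 200        # ~1,400 bits/s for deliberate "Type 2" work
cpu_throughput = 64 * 3e9          # one ~3 GHz 64-bit core, very roughly
print(serial_throughput)                    # 1400
print(cpu_throughput / serial_throughput)   # on the order of 10**8 times wider
```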