Maybe “type 2” is generally expensive, as opposed to specifically expensive for humans because humans happen to mostly suck at it.
I suggest, rather, that type 2 is universally cheaper if done efficiently, and that we use type 1 preferentially because our “type 2” thought is essentially emulated in “type 1” architecture.
Most of what I did when creating AI was finding more intelligent ways to program the type 2 system such that I could reduce the calls to the expensive type 1 evaluation system.
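The idea of using cheap explicit logic to cut down calls to an expensive evaluator can be sketched roughly as follows. This is a hypothetical toy illustration, not the commenter's actual system: a rule-based filter (standing in for "type 2") prunes candidates before an expensive holistic evaluator (standing in for "type 1") is consulted, and we count how many expensive calls each approach makes.

```python
def expensive_type1_eval(state, counter):
    """Stand-in for a costly holistic evaluation (e.g. a learned heuristic)."""
    counter["calls"] += 1
    return -sum((x - 3) ** 2 for x in state)  # toy scoring function

def cheap_type2_filter(state):
    """Explicit logical rule: discard states violating a known constraint."""
    return all(0 <= x <= 6 for x in state)

def best_state(candidates, use_filter=True):
    """Pick the best candidate, optionally pruning with the cheap rule first."""
    counter = {"calls": 0}
    pool = [s for s in candidates if cheap_type2_filter(s)] if use_filter else list(candidates)
    best = max(pool, key=lambda s: expensive_type1_eval(s, counter))
    return best, counter["calls"]

candidates = [(0, 9), (3, 3), (7, 1), (2, 4), (8, 8)]
_, calls_with_filter = best_state(candidates, use_filter=True)
_, calls_without = best_state(candidates, use_filter=False)
assert calls_with_filter < calls_without  # fewer expensive evaluations
```

Whether this wins overall depends, of course, on the cheap rules being both fast and sound for the domain; when they are, each pruned candidate is one expensive evaluation saved.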
I would be interested in more details on that, because my experience very much matches what wnewman said: it’s more efficient to do as many things as reasonably possible in blind hardwired code than with search or explicit logical inference.
What sort of problems were you trying to solve, and what sort of architecture were you using that you found type 2 more efficient than type 1?
Perhaps I am merely extracting a different message from the “Type 1” and “Type 2” distinction. When I look at the function of the human brain, “Type 1” is orders of magnitude more powerful when implemented in mammal architecture than the pathetic excuse for “Type 2” that we have. 7 bits and 200 Hz? Seriously? How is Type 2 supposed to work with that? But it occurs to me that there are all sorts of arbitrary distinctions that can be drawn between ‘1’ and ‘2’ when generalized away from humans. My own interpretation is obviously going to be biased by different prior models.