“But an Artificial Intelligence programmer who knows how to create a chess-playing program out of base matter, has taken a genuine step toward crossing the gap. If you understand concepts like consequentialism, backward chaining, utility functions, and search trees, you can make merely causal/mechanical systems compute plans.”
The space of algorithms to play chess “well” is large. That space is not equivalent to the space of “intelligence.”
Your conjecture seems to be that the Problem of Chess requires intelligence.
I also don’t see how you can claim that understanding utility functions helps you understand the brain. Do you think that such functions are explicitly represented in the brain? Do you have ANY reason to believe this?
I guess it seems to me that you’re claiming that you have reason to believe you understand something about what intelligence is—but then you go on to talk about some crappy models we have for it.