You’ve convinced me that I don’t have conscious introspective access to the algorithms I use for these things. This doesn’t mean that my brain isn’t doing something pretty structured and formal underneath.
The formalization example is, I think, a good one. There's a famous book by George Polya, "How to Solve It". It's effectively a list of mental tactics used in problem solving, especially mathematical problem solving.
When I sit down to solve a problem, like formalizing the natural numbers, I apply something like Polya's tool-set iteratively. "Have I formalized something similar before?" "Is there a simpler version I could start with?" And so forth. This is partly conscious and partly not, but the fact that we don't have introspective access to the unconscious mind doesn't make it non-algorithmic.
As I work, I periodically evaluate what I have. There’s a black box in my head for “do I like this?” I don’t know a lot about its internals, but that again isn’t evidence for it being non-algorithmic. It’s fairly deterministic. I have no reason to doubt that there’s a Turing machine that simulates it.
Effectively, my algorithm for math works like this:
while (nothing else is a higher priority than this problem) {
    stare at the problem and try to understand it
    search my past memories for something related  // neural nets are good at this
    for each relevant past memory {
        try to apply a relevant technique that worked in the past
        evaluate the result
        if (it looks like progress)
            declare this to be the new version of the problem
    }
}

Seems algorithmic to me!
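If it helps to see that skeleton as real code, here is a toy Python sketch of the same loop. Every helper in it is a hypothetical stub standing in for a mental step I can't formalize; only the control structure is meant seriously.

    def understand(problem):                  # "stare at the problem"
        return problem

    def recall_related(problem):              # associative memory lookup (stub)
        return []

    def apply_technique(memory, problem):     # reuse a past trick (stub)
        return problem

    def looks_like_progress(candidate, problem):
        return candidate != problem

    def solve(problem, higher_priority_pending):
        while not higher_priority_pending():
            current = understand(problem)
            for memory in recall_related(current):
                candidate = apply_technique(memory, current)
                if looks_like_progress(candidate, current):
                    problem = candidate       # the new version of the problem
        return problem

    # e.g. solve("formalize the naturals", lambda: True) returns at once,
    # since something else is already higher priority.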
Sorry, it seems you are just presuming computationalism. The question is not "Why would it not be algorithmic?" but "Why would it be algorithmic?", considering that, as you say yourself, from your perspective no algorithm is visible.
The algorithm you wrote down is a nice metaphor, but not in any way an algorithm in the sense computer science means it. Since we are talking about AI, I am only referring to "algorithm" in the sense of "precisely formalizable procedure", as in computer science.
I agree that neural nets in general, and the human brain in particular, can't readily be replaced with a well-structured computer program of moderate complexity.
But indeed I was presuming computationalism, in the sense that "all a human brain does is compute some function that could in principle be formalized, given enough information about the particular brain in question". If that's the claim you wanted to focus on, you should have raised it more directly.
Computationalism is quite separate from whether there is a simple formalism for intelligence. I believe computationalism because I believe it would be possible to build an accurate neuron-level simulator of a brain. Such a simulator could be evaluated on any Turing-equivalent computer. But the resulting function would be very messy and would lack a simple hierarchical structure.
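To make "neuron-level simulator" concrete, here is a minimal sketch of a single update step, assuming a leaky integrate-and-fire model. The model and every parameter in it are illustrative assumptions, not claims about real neurons; the point is just that the update is an ordinary computable function.

    import numpy as np

    def lif_step(v, w, spiked, leak=0.9, threshold=1.0):
        # One timestep: decay the membrane potentials, add weighted
        # input from the neurons that spiked last step, emit new spikes.
        v = leak * v + w @ spiked
        new_spikes = (v >= threshold).astype(float)
        v = np.where(new_spikes > 0, 0.0, v)  # reset neurons that fired
        return v, new_spikes

    # Tiny run: 3 neurons with random synaptic weights, 100 steps.
    rng = np.random.default_rng(0)
    w = rng.normal(scale=0.5, size=(3, 3))
    v, spiked = np.zeros(3), np.ones(3)
    for _ in range(100):
        v, spiked = lif_step(v, w, spiked)

A real brain simulation would need vastly more biophysical detail, which is exactly why I expect the resulting function to be messy rather than cleanly hierarchical.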
Which part of this are we disagreeing on? Do you think a neuron-level brain simulation could produce intelligent behavior similar in character to a human being? Do you think an engineered software artifact could ever do the same?