It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.
Oh I see what you mean. Well, I certainly agree with that!
The point is that, almost paradoxically, computers so far have been good at doing tasks that are difficult for humans and impossible for chimps (difficult mathematical computations, chess, Jeopardy, etc.), yet they can’t do well at tasks which are trivial for chimps or even for dogs.
Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do. It makes sense, then, that programs designed to do the hard things would not do them the way humans do. It’s the mind projection fallacy to assume that tasks that feel easy to us are intrinsically easy, and that tasks that feel hard are intrinsically hard.
But this doesn’t explain why we can build and program computers to do it much better than we do.
We are actually quite good at inventing algorithmic procedures to solve problems that we find difficult, but we suck at executing them, while we excel at doing things we can’t easily describe as algorithmic procedures.
In fact, inventing algorithmic procedures is perhaps the hardest task to describe algorithmically, due to various theoretical results from computability and complexity theory and some empirical evidence.
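To make the computability point concrete, here is a minimal sketch (in Python, purely illustrative, not something from this thread) of Turing’s halting-problem argument, the classic result showing that no general algorithm can even decide what an arbitrary program will do, let alone reliably invent and verify one:

```python
def halts(program, argument):
    """Hypothetical oracle: would program(argument) eventually halt?
    Turing's diagonal argument shows that no total, correct version
    of this function can exist."""
    raise NotImplementedError("provably impossible in general")

def paradox(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:   # predicted to halt, so loop forever
            pass
    else:
        return        # predicted to loop, so halt immediately

# Does paradox(paradox) halt? Either answer from `halts` contradicts
# paradox's actual behavior, so no such oracle can exist.
```

The same diagonalization is one reason fully general “algorithm-inventing algorithms” keep running into undecidability barriers.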
Some people take this fact as evidence that the human mind is fundamentally non-computational. I don’t share this view, mainly for physical reasons, but I think it might have a grain of truth:
While the human mind is probably computational in the sense that in principle (and maybe one day in practice) we could run low-level brain simulations on a computer, its architecture differs from the architecture of typical computer hardware and software in a non-trivial, and probably still poorly understood, way. Even the most advanced modern machine learning algorithms are at best crude approximations of what’s going on inside the human brain.
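As a rough illustration of that gap (a toy sketch with made-up parameters that I’m adding here, not anything from the discussion): the “neuron” in a typical machine learning model is a single instantaneous weighted sum, while even the simplest spiking models from computational neuroscience, themselves drastic simplifications of real neurons, unfold over time:

```python
import numpy as np

def relu_unit(inputs, weights, bias):
    # The standard ML abstraction: an instantaneous weighted sum
    # passed through a ReLU nonlinearity.
    return max(0.0, float(np.dot(inputs, weights) + bias))

def leaky_integrate_and_fire(current, steps=1000, dt=1e-3,
                             tau=0.02, threshold=1.0):
    # Leaky integrate-and-fire: membrane voltage v integrates the input
    # current over time, leaks away with time constant tau, and emits a
    # discrete spike (then resets) whenever it crosses the threshold.
    v, spike_times = 0.0, []
    for step in range(steps):
        v += dt * (-v / tau + current)
        if v >= threshold:
            spike_times.append(step * dt)
            v = 0.0
    return spike_times

print(relu_unit(np.array([0.5, -0.2]), np.array([1.0, 2.0]), 0.1))  # 0.2
print(len(leaky_integrate_and_fire(current=60.0)))  # ~28 spikes in 1 s
```

Real neurons add dendritic nonlinearities, neuromodulation, and ongoing plasticity on top of this, which is the sense in which even our best models remain crude approximations.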
Maybe this means that we are missing some insight, some non-trivial property that sets intelligent processes apart from other typical computations, or maybe there is just no feasible way of obtaining human-level, or even chimp-level, intelligence without a neuromorphic architecture, that is, a substantially low-level emulation of a brain.
Anyway, I think that this counterintuitive observation is probably the source of the over-optimistic predictions about AI: people, even experts, consistently underestimate how difficult apparently easy cognitive tasks really are.
It absolutely does. These are things that humans are not designed to do. (Things humans are designed to do: recognizing familiar faces, isolating a single voice in a crowded restaurant, navigating from point A to point B by means of transport, walking, language, etc.) Imagine hammering in a nail with a screwdriver. You could do it... but not very well. When we design machines to solve problems we find difficult, we create solutions that don’t exist in the structure of our brain. It would be natural to expect that artificial minds would be better suited to solve some problems than we are.
Other than that, I’m not sure what you’re arguing, since that “counterintuitive observation” is exactly what I was saying.