The David Deutsch article seems silly—as usual :-(
Deutsch argues of “the target ability” that “the information for how to achieve it must be encoded in the relatively tiny number of differences between the DNA of humans and that of chimpanzees”.
That makes no sense. Maybe a bigger brain alone would enable cumulative cultural evolution—and so all that would be needed is some more “add brain here” instructions. Yet: “make more of this” is hardly the secret of intelligence. So: Deutsch’s argument here is not coherent.
I think some parts of the article are wrong, but not that part, and I can’t parse your counterargument. Could you elaborate?
Looking into the difference between human genes and chimpanzee genes probably won’t help much with developing machine intelligence. Nor would it be much help in deciding how big the difference is.
The chimpanzee gene pool doesn’t support cumulative cultural evolution, while the human gene pool does. However, all that means is that chimpanzees are on one side of the cultural “tipping point” and humans are on the other. Crossing such a threshold may not require additional complex machinery. It might just need an instruction of the form “delay brain development”, since brains can now develop safely in baby slings.
Indeed, crossing the threshold might not have required gene changes at all—at the time. It probably just required increased population density—e.g. see: High Population Density Triggers Cultural Explosions.
I don’t think Deutsch is arguing that looking at the differences between human and chimpanzee genomes is a promising path for AGI insights; he’s just saying that there might not be all that much insight needed to get to AGI, since there don’t seem to be huge differences in cognitive algorithms between chimpanzees and humans. Even a culturally-isolated feral child (e.g. Dani) has qualitatively more intelligence than a chimpanzee, and can be taught crafts, sports, etc. — and language, to a more limited degree (as far as we know so far; there are very few cases).
It is true that there might not be all that much insight needed to get to AGI on top of the insight needed to build a chimpanzee. The problem that Deutsch is neglecting is that we have no idea about how to build a chimpanzee.
Oh I see what you mean. Well, I certainly agree with that!
The point is that, almost paradoxically, computers so far have been good at doing tasks that are difficult for humans and impossible for chimps (difficult mathematical computations, chess, Jeopardy, etc.), yet they can’t do well at tasks which are trivial for chimps or even for dogs.
Which is actually not all that striking a revelation when you consider that when humans find something difficult, it is because it is a task or a procedure we were not built to do. It makes sense then that programs designed to do the hard things would not do them the way humans do. It’s the mind projection fallacy to assume that easy tasks are in fact easy, and hard tasks hard.
But this doesn’t explain why we can build and program computers to do it much better than we do.
We are actually quite good at inventing algorithmic procedures to solve problems that we find difficult, but we suck at executing them, while we excel at doing things we can’t easily describe as algorithmic procedures.
In fact, inventing algorithmic procedures is perhaps the hardest task to describe algorithmically, due to various theoretical results from computability and complexity theory and some empirical evidence.
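To make that asymmetry concrete, here is a small toy illustration (my own sketch, nothing from the article; the face-recognition stub is purely hypothetical): Euclid’s algorithm is a procedure we invented, can state in a few lines, and find tedious to execute by hand, while for a task we perform effortlessly there is no comparably short procedure anyone knows how to write down.

```python
# Euclid's algorithm: easy for humans to describe, tedious for humans to
# execute by hand, and trivial for a computer to run millions of times.
def gcd(a: int, b: int) -> int:
    while b != 0:
        a, b = b, a % b
    return a

print(gcd(1071, 462))  # 21

# The contrast: a task that is effortless for us but for which nobody can
# write down a comparably short procedure. (Hypothetical stub, of course.)
def recognize_face(image):
    raise NotImplementedError("no short algorithmic description is known")
```

The gap between those two functions is the asymmetry in question.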
Some people consider this fact as evidence that the human mind is fundamentally non-computational. I don’t share this view, mainly for physical reasons, but I think it might have a grain of truth:
While the human mind is probably computational in the sense that in principle (and maybe one day in practice) we could run low-level brain simulations on a computer, its architecture differs from the architecture of typical computer hardware and software in a non-trivial, and probably still poorly understood way. Even the most advanced modern machine learning algorithms are at best only crude approximations of what’s going on inside the human brain.
Maybe this means that we are missing some insight, some non-trivial property that sets intelligent processes apart from other typical computations, or maybe there is just no feasible way of obtaining human-level, or even chimp-level, intelligence without a neuromorphic architecture: essentially a low-level emulation of a brain.
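As a rough, purely illustrative sketch of that gap (again my own toy code, assuming a leaky integrate-and-fire model counts as one very modest step toward “lower level”): the “neuron” in a typical machine-learning model is a single weighted sum passed through a nonlinearity, whereas even a crude spiking model already has to track membrane dynamics over time, and real neurons are far more complicated than either.

```python
import math

# A typical machine-learning "neuron": one weighted sum and a nonlinearity.
def ml_neuron(inputs, weights, bias):
    return math.tanh(sum(w * x for w, x in zip(weights, inputs)) + bias)

# A (still hugely simplified) leaky integrate-and-fire neuron: the membrane
# voltage leaks toward rest, integrates input current over time, and the
# cell emits a spike whenever the voltage crosses a threshold.
def lif_neuron(currents, dt=1.0, tau=10.0, v_rest=0.0, v_thresh=1.0):
    v, spikes = v_rest, []
    for i in currents:
        v += dt * (-(v - v_rest) + i) / tau  # leak toward rest, plus input
        if v >= v_thresh:                    # threshold crossing: a spike
            spikes.append(True)
            v = v_rest                       # reset after the spike
        else:
            spikes.append(False)
    return spikes

print(ml_neuron([0.5, -0.2], [1.0, 2.0], 0.1))  # a single static number
print(lif_neuron([1.5] * 50).count(True))       # spikes emitted over time
```

Neither of these is anywhere near what a real neuron does; the point is only how different in kind the two levels of description already are.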
Anyway, I think that this counterintuitive observation is probably the source of the over-optimistic predictions about AI: people, even experts, consistently underestimate how difficult apparently easy cognitive tasks really are.
It absolutely does. These are things that humans are not designed to do. (Things humans are designed to do: recognizing familiar faces, isolating a single voice in a crowded restaurant, navigating from point A to point B by various means of transport, walking, language, etc.) Imagine hammering in a nail with a screwdriver. You could do it... but not very well. When we design machines to solve problems we find difficult, we create solutions that don’t exist in the structure of our brain. It would be natural to expect that artificial minds would be better suited than we are to solve some problems.
Other than that, I’m not sure what you’re arguing, since that “counterintuitive observation” is exactly what I was saying.