What is the essential difference between human and animal intelligence? I don’t actually think it’s just a matter of degree. To put it simply, most brains are once-through machines. They take input from the senses, process it in conjunction with memories, and turn that into actions, and perhaps new memories. Their brains have lots of special-purpose optimizations for many things, and a surprising amount can be achieved like this. The brains are once-through largely because that’s the fastest approach, and speed is important for many things. Human brains are still mostly once-through.
But we humans have one extra trick, which is to do with self-awareness. We can to an extent sense the output of our brains, and that output then becomes new input. This in turn leads to new output which can become input again. This apparently simple capability—forming a loop—is all that’s needed to form a Turing-complete machine out of the specialized animal brain.
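As a toy illustration of that loop (the names and rules here are invented for the example, not a claim about how real brains encode anything), here is the difference between a once-through mapping and the same kind of mapping whose output is fed back in as new input:

```python
def once_through(stimulus):
    """Fixed stimulus -> response table; never sees its own output."""
    return {"mammoth": "run", "berry": "eat"}.get(stimulus, "ignore")

def with_feedback(stimulus, steps=3):
    """Feed each output back in as the next input, forming a chain of 'thoughts'."""
    chain = [stimulus]
    rules = {"mammoth": "danger", "danger": "weapon?", "weapon?": "spear"}
    for _ in range(steps):
        nxt = rules.get(chain[-1])  # read our own previous output
        if nxt is None:
            break
        chain.append(nxt)
    return chain
```

The once-through version can only ever produce one step of response per stimulus; the feedback version can unfold an arbitrarily long chain from the same simple rule table.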
Without such a loop, an animal may know many things, but it will not know that it knows them. Because it isn’t able to sense explicitly what it was just thinking about, it can’t start off a new thought based on the contents of the previous one.
The divide isn’t absolute, I’m sure—I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought. And that small difference makes all the difference in the world.
Chimps can suss out recursive puzzles where you have color-coded keys and locks, and you need to unlock Box A to get Key B to unlock Box B to get Key C to unlock Box C which contains food. They even choose the right box to unlock when one chain leads to the food and the other doesn’t.
Sorry, there’s not a difference of kind to be found here.
How much training is necessary for them to do this? Humans can reason this out without any training. If the chimps had to be trained substantially (e.g. first starting with one box and being rewarded with food, then starting with two boxes, etc.), then I think this would constitute a difference.
Well, one could argue that humans “train” for similar problems throughout their lives… Would you expect a feral child to figure that out straight away?
The divide isn’t absolute, I’m sure—I believe essentially all mammals have quite a bit of self-awareness, but only in humans does that facility seem to be good enough to allow the development of a chain of thought.
If dolphins or chimps did or did not have chains of thought, how would we be able to tell the difference?
Because of what you can do with a train of thought.
“That mammoth is very dangerous, but would be tasty if I killed it.”
“I could kill it if I had the right weapon.”
“What kind of weapon would work?”
As against…
“That mammoth is very dangerous—run!”
Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don’t have the ability to read your own output, you can’t.
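A minimal sketch of that claim: a two-counter machine has only memory (the counters), a condition (jump-if-zero), and a loop (the fetch-execute cycle), yet machines of this family are known to be Turing-complete given unbounded counters. The instruction encoding below is my own invention for illustration:

```python
def run(program, counters):
    """program: list of ('inc', reg, next_pc) or ('jzdec', reg, pc_if_zero, pc_else).
    Halts when the program counter runs off the end of the program."""
    pc = 0
    while pc < len(program):          # the loop
        op = program[pc]
        if op[0] == "inc":
            counters[op[1]] += 1      # the memory
            pc = op[2]
        else:  # 'jzdec': jump if zero, else decrement and continue
            _, r, pc_if_zero, pc_else = op
            if counters[r] == 0:      # the condition
                pc = pc_if_zero
            else:
                counters[r] -= 1
                pc = pc_else
    return counters

# Example program: add counter 1 into counter 0.
prog = [("jzdec", 1, 2, 1),  # 0: if c1 == 0 jump past the end (halt), else c1 -= 1
        ("inc", 0, 0)]       # 1: c0 += 1, loop back to 0
```

Running `run(prog, [3, 4])` drains counter 1 into counter 0, leaving `[7, 0]` — addition built from nothing but increment, decrement, and jump-if-zero.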
If dolphins or chimps did have arbitrarily long chains of thought, they’d be able to do general reasoning, as we do.
The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.
So what do you expect to be the signs of arbitrary general reasoning? Humans run out of memory eventually. If a dolphin or a chimp can do arbitrary reasoning but simply lacks the capacity to hold long chains in mind, what would you expect to see? I’m still not sure what actual testable distinction would occur in these cases, although insofar as I can think of what might arguably be evidence, it looks like dolphins pass, as you can see in the article already linked to in this thread.
Let’s think about the computer that you’re using to look at this website. It’s able to do general purpose logic, which is in some ways quite a trivial thing to learn. It’s really quite poor at pattern matching, where we and essentially all intelligent animals excel. It is able to do fast data manipulation, reading its own output back.
As I’m sure you know, there’s a distinction between computing systems which, given enough memory, can simulate any other computing system and computing systems which can’t. Critical to the former is the ability to form a stored program of some description, and read it back and execute it. Computers that can do this can emulate any other computer (albeit in a speed-challenged way in some cases).
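A toy sketch of the stored-program idea (the operation names are invented for illustration): the “program” is ordinary data that the interpreter reads back and executes, so one machine behaves like a different machine just by loading a different description:

```python
def emulate(description, x):
    """Treat `description` (a list of named steps, i.e. plain data)
    as a stored program and execute it over the value x."""
    ops = {"double": lambda v: v * 2, "inc": lambda v: v + 1}
    for step in description:   # read the program back...
        x = ops[step](x)       # ...and execute it
    return x
```

The same interpreter computes different functions depending on the data it is handed: `emulate(["inc", "double"], 3)` gives 8, while `emulate(["double", "inc"], 3)` gives 7.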
Chimps and dolphins are undoubtedly smart, but for some reason they aren’t crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won’t find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink—there isn’t THAT much of a difference between the two states.
My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.
This only can happen if you have a sufficiently strong introspective sense. If you haven’t got that, your thoughts remain dominated by the concrete world driven by your other senses.
Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy! The puzzle is why they can’t think about philosophy, not why we can. Our much vaunted generality is pretty borderline at times—humans are truly BAD at being rational, and incredibly slow at reasoning. Why is such a powerful piece of hardware as us so utterly incompetent at something so simple?
The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.
What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers—big numbers—is not that far away.
Something like the ability to grasp that there is no largest number is probably the threshold—the logic’s simple, but it requires you to think of a number separately from the real world. Hard to know how to show whether dolphins might know this or not, I appreciate that. I think it’s essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple, and they can grasp things of that complexity that are driven by the external world. But perhaps they can’t see their internal world well enough to be able to pull ‘number’ as an idea out from ‘two’ and ‘three’ (which are ideas that dolphins are surely able to get), and then finish the puzzle.
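The proof alluded to here really is that short; written out, it is just:

```latex
% "No largest number" in three steps.
\textbf{Claim.}\ \text{There is no largest natural number.} \\
\textbf{Proof.}\ \text{Suppose some natural number } N \text{ were the largest.} \\
\text{Then } N + 1 \text{ is also a natural number, and } N + 1 > N, \\
\text{contradicting the choice of } N. \qquad \blacksquare
```

Every logical move (successor, comparison, contradiction) is elementary; what the argument demands is holding “a number” as an object of thought detached from any collection of real things.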
Perhaps it’s not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.
But then, there are plenty of examples of chimps exhibiting behavior that implies intelligence.