Because of what you can do with a train of thought.
“That mammoth is very dangerous, but would be tasty if I killed it.”
“I could kill it if I had the right weapon.”
“What kind of weapon would work?”
As against…
“That mammoth is very dangerous—run!”
Computer science is where this particular insight comes from. If you can lay down memories, execute loops and evaluate conditions, you can simulate anything. If you don’t have the ability to read your own output, you can’t.
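To make that concrete, here’s a toy sketch (my own illustration, not anything load-bearing in the argument): an interpreter for Brainfuck, a famously minimal language whose only facilities are memory cells, a loop, and a conditional test, and which is nevertheless Turing-complete.

```python
# Toy Brainfuck interpreter: memory cells, a loop, a conditional test.
# Nothing else, and yet this is enough to simulate any computation.

def run(program: str, tape_len: int = 30000) -> str:
    tape = [0] * tape_len          # the "memories"
    ptr = 0                        # which memory cell we are looking at
    pc = 0                         # position in the program
    out = []
    while pc < len(program):
        op = program[pc]
        if op == '+':
            tape[ptr] = (tape[ptr] + 1) % 256
        elif op == '-':
            tape[ptr] = (tape[ptr] - 1) % 256
        elif op == '>':
            ptr += 1
        elif op == '<':
            ptr -= 1
        elif op == '.':
            out.append(chr(tape[ptr]))
        elif op == '[' and tape[ptr] == 0:    # conditional: skip the loop body
            depth = 1
            while depth:
                pc += 1
                depth += {'[': 1, ']': -1}.get(program[pc], 0)
        elif op == ']' and tape[ptr] != 0:    # loop: jump back to the matching '['
            depth = 1
            while depth:
                pc -= 1
                depth += {']': 1, '[': -1}.get(program[pc], 0)
        pc += 1
    return ''.join(out)

# Builds up ASCII codes in the memory cells using a loop, then prints "HI".
print(run('++++++++[>+++++++++<-]>.+.'))
```

The point is that nothing beyond those ingredients is needed; everything else a real computer adds is speed and convenience.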
If dolphins or chimps did have arbitrarily long chains of thought, they’d be able to do general reasoning, as we do.
The examples of corvids designing and making specialized tools after observing what they would need to solve specific problems (placement of an otherwise inaccessible treat) seem to demonstrate such chains of thought.
So what do you expect the signs of arbitrary general reasoning to be? Humans run out of memory eventually too. If a dolphin or a chimp can do arbitrary reasoning but, apart from this, lacks the capacity to hold long chains internally, what would you expect to see? I’m still not sure what actual testable distinction would arise in these cases, although insofar as I can think of what might arguably count as evidence, it looks like dolphins pass, as you can see in this article already linked to in this thread.
Let’s think about the computer you’re using to look at this website. It’s able to do general-purpose logic, which is in some ways quite a trivial thing to learn. It’s really quite poor at pattern matching, where we and essentially all intelligent animals excel. And it is able to do fast data manipulation, reading its own output back.
As I’m sure you know, there’s a distinction between computing systems which, given enough memory, can simulate any other computing system, and computing systems which can’t. Critical to the former is the ability to form a stored program of some description, read it back, and execute it. Computers that can do this can emulate any other computer (albeit in a speed-challenged way in some cases).
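Here’s what the stored-program idea amounts to, again as a toy of my own rather than anything canonical: the program below is just data sitting in the same flat memory the machine manipulates, and the machine’s entire job is to read it back and execute it, one instruction at a time.

```python
# A toy stored-program machine (my own sketch). Instructions and data
# share one flat memory; the machine only reads the program back and runs it.

def execute(memory):
    """memory: flat list in which instructions (4 cells each) and data coexist."""
    pc = 0
    while True:
        op, a, b, c = memory[pc:pc + 4]       # read the stored program back out
        if op == 'halt':
            return memory
        elif op == 'set':                     # memory[a] = b (b is a literal)
            memory[a] = b
        elif op == 'add':                     # memory[c] = memory[a] + memory[b]
            memory[c] = memory[a] + memory[b]
        elif op == 'jump_if_zero':            # if memory[a] == 0: jump to address b
            if memory[a] == 0:
                pc = b
                continue
        pc += 4

# Compute 5 * 3 by repeated addition. Instructions occupy cells 0-35,
# working data lives in cells 40-43, and cell 44 stays 0 ("always jump").
program = [
    'set', 40, 5, 0,              #  0: counter = 5
    'set', 41, 3, 0,              #  4: addend  = 3
    'set', 42, 0, 0,              #  8: total   = 0
    'set', 43, -1, 0,             # 12: constant -1
    'jump_if_zero', 40, 32, 0,    # 16: if counter == 0, jump to halt
    'add', 42, 41, 42,            # 20: total += addend
    'add', 40, 43, 40,            # 24: counter -= 1
    'jump_if_zero', 44, 16, 0,    # 28: cell 44 is always 0, so jump back to 16
    'halt', 0, 0, 0,              # 32: stop
]
memory = program + [0] * 12       # pad out the data region (cells 36-47)
print(execute(memory)[42])        # -> 15
```

Given enough memory, a loop like this can run a description of any other machine, which is the emulation claim above; the only cost is speed.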
Chimps and dolphins are undoubtedly smart, but for some reason they aren’t crossing the threshold to generality. Their minds can represent many things, but not (apparently) the full gamut of what we can do. You won’t find any chimps or dolphins discussing philosophy or computer science. My point actually is that humans went from making only relatively simple stone tools to discussing philosophy in an evolutionary eye-blink—there isn’t THAT much of a difference between the two states.
My observation is that when we think, we introspect. We think about our thinking. This allows thought to connect to thought, and form patterns. If you can do THAT, then you are able to form the matrix of thought that leads to being able to think about the kinds of things we discuss here.
This can only happen if you have a sufficiently strong introspective sense. If you haven’t got that, your thoughts remain dominated by the concrete world driven by your other senses.
Can I turn this on its head? A chimp has WAY more processing power than any supercomputer ever built, including the Watson machine that trounced various humans at Jeopardy! The puzzle is why they can’t think about philosophy, not why we can. Our much-vaunted generality is pretty borderline at times—humans are truly BAD at being rational, and incredibly slow at reasoning. Why is hardware as powerful as ours so utterly incompetent at something so simple?
The reason, I believe, is that our brains are largely evolved to do something else. Our purpose is to sense the world, and rapidly come up with some appropriate response. We are vastly parallel machines which do pattern recognition and ultra-fast response, based on inherently slow switches. Introspection appears largely irrelevant to this. We probably evolved it only as a means of predicting what other humans and creatures would do, and only incidentally did it turn into a means of thinking about thinking.
What is the actual testable distinction? Hard to say, but once you gain the ability to reason independently from the senses, the ability to think about numbers—big numbers—is not that far away.
Something like the ability to grasp that there is no largest number is probably the threshold—the logic is simple, but it requires you to think of a number separately from the real world. I appreciate that it’s hard to know how to show whether dolphins know this or not. I think it’s essentially proven that dolphins are smart enough to understand the logical relationships between the pieces of this proof, as the relationships are simple and they can grasp things of that complexity when they are driven by the external world. But perhaps they can’t see their internal world well enough to pull ‘number’ as an idea out from ‘two’ and ‘three’ (ideas that dolphins are surely able to get), and then finish the puzzle.
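For what it’s worth, here is the whole argument spelled out (my own formalisation of what I mean by the logic being simple):

```latex
% The "no largest number" argument. The only fact needed is that
% adding one to a number gives a bigger number.
\begin{align*}
&\text{Suppose, for contradiction, that some } N \in \mathbb{N}
  \text{ satisfies } n \le N \text{ for every } n \in \mathbb{N}.\\
&\text{But } N + 1 \in \mathbb{N} \text{ and } N + 1 > N,
  \text{ contradicting the supposition.}\\
&\text{Hence there is no largest number.}
\end{align*}
```

Every step is of a kind dolphins demonstrably handle when the objects involved are concrete; the question is whether they can run it when the objects are purely internal.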
Perhaps it’s not chains that are the issue, but the ability to abstract clear of the outside world and carry on going.