Strong AI is refuted because syntax is insufficient for semantics.
Where the heck does that come from? What do you mean by “strong AI is refuted”, “syntax is insufficient for semantics”, and how does the former follow from the latter?
The strong AI hypothesis is that consciousness is the software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work in order to construct a living conscious mind: any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned, it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false. (A sketch after this comment makes the syntax point concrete.)
Which means that IBM is wasting time, energy and money. But… perhaps their efforts will result in spin-off technology, so not all is lost.
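(To make the “pure syntax” premise concrete, here is a minimal sketch in Python; the bit pattern and its readings are arbitrary illustrations, not anything taken from the discussion. One and the same bit pattern is an integer, a float, raw bytes, or text depending entirely on the interpretation a reader assigns to it; nothing in the bits themselves fixes a meaning.)

```python
import struct

# One 32-bit pattern, fixed once and never changed.
bits = 0x42F6E979

# The same bits under four different assigned interpretations:
as_int   = bits                                             # the integer 1123477881
as_float = struct.unpack('>f', bits.to_bytes(4, 'big'))[0]  # the float ~123.456
as_bytes = bits.to_bytes(4, 'big')                          # the bytes b'B\xf6\xe9y'
as_text  = as_bytes.decode('latin-1')                       # the string 'Böéy'

print(as_int, as_float, as_bytes, as_text)
```

The machine manipulates the bits identically in every case; which of these “meanings” is in play is settled outside the machine.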
“Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds.”
How would one determine whether a given device/system has this “semantic content”? What kind of evidence should one look at? Inner structure? Only inputs and outputs? Something else?
I second fubarobfusco. While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don’t count as semantic content, I don’t know what does.
So, I still don’t know what makes you so sure consciousness is impossible on an emulator. (Leaving aside the fact that using “strong AI” to talk about consciousness, instead of capabilities, is a bit strange.)
That is correct, you don’t know what semantic content is.
“I still don’t know what makes you so sure consciousness is impossible on an emulator.”
For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that give rise to real consciousness.
Let us imagine that you go to your doctor and he says, “Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy.”
“Sign here.”
Do you sign the consent form?
Simulation is not duplication. In order to duplicate the causal effects of real-world processes it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world. (A worked sketch of this example follows below.)
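(A worked version of the lever example, as a minimal sketch in Python with made-up numbers: the program derives the force that would balance the lever, yet running it lifts nothing.)

```python
def balancing_force(load_n, load_arm_m, effort_arm_m):
    """Effort force (newtons) needed to balance an ideal lever.

    Law of the lever: effort * effort_arm = load * load_arm.
    """
    return load_n * load_arm_m / effort_arm_m

# Made-up numbers: an 800 N load 0.5 m from the fulcrum,
# opposed through a 2.0 m effort arm.
print(balancing_force(800.0, 0.5, 2.0))  # 200.0 -- a number on a screen, not a lifted mass
```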
In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate the causal relations that allow real brains to give rise to the real-world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump is a real pump; it will never pump a single drop of fluid.
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real-world neurons or other structures in real brains do. How could it be any other way?
the real-world physical phenomenon we call consciousness
I don’t know what you mean by “physical” here. For any other “physical phenomenon” (light, heat, magnetism, momentum, etc.) I could imagine a device that measures or detects it. I have no idea how one would go about making a device that detects the presence of consciousness.
In fact, I don’t see anything “consciousness” has in common with light, heat, magnetism, friction etc. that warrants grouping them in the same category. It would be like having a category for “watersnail-eating fish, and Switzerland”.
“If those capabilities don’t count as semantic content, I don’t know what does.”
That is correct, you don’t know what semantic content is.
Care to explain?
Meaning.
The words on this page mean things. They are intended to refer to other things.
Oh, and how do you know that?
Meaning is assigned, it is not intrinsic to symbolic logic.
Assigned by us, I suppose? Then what makes us so special?
Anyway, that’s not the most important point:
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine.
Of course not: von Neumann machines have limitations that would make them too slow. But even in principle? I have a few questions for you:
Do you think it is impossible to build a simulation of the human brain on a von Neumann machine, accurate enough to predict the behaviour of an actual brain?
If it is possible, do you think it is impossible to link such a simulation to reality via an actual humanoid body? (The inputs would be the sensory system of the body, and the outputs would be the various actions performed by the body; a schematic sketch follows these questions.)
If it is possible, do you think the result is conscious? Why not?
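(A schematic of the loop the second question describes, sketched in Python; every name below is a placeholder for machinery nobody in this exchange has specified, not a real API.)

```python
# Hypothetical embodiment loop: a simulated brain wired to a humanoid body.
# step_brain, read_sensors and drive_actuators are placeholder callables.

def run_embodied(step_brain, read_sensors, drive_actuators, steps=1000):
    state = None
    for _ in range(steps):
        percept = read_sensors()                    # inputs: the body's sensory system
        state, action = step_brain(state, percept)  # the simulated brain, stepped once
        drive_actuators(action)                     # outputs: actions performed by the body
```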