Parallelism changes absolutely nothing other than speed of execution.
Strong AI is refuted because syntax is insufficient for semantics. Allowing the syntax to execute in parallel will not alter this because the refutation of strong AI attacks the logical basis for the strong AI hypothesis itself. If you are trying to build a television with tinker-toys it does not improve your chances to substitute higher quality tinker-toys for the older wooden ones. You will still never get a functional TV.
They do not actually have a physical non-von Neumann architecture. They are simulating a brain on simulated neurosynaptic cores on a simulated non-von Neumann architecture on a Blue Gene/Q supercomputer, which consists of 64-bit PowerPC A2 processors connected in a toroidal network. No wonder it’s slow.
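The cost of stacking simulations can be caricatured with a toy model: if each layer of interpretation spends some fixed number of its own steps per step of the layer above it, the slowdown compounds multiplicatively. A minimal sketch (the overhead constant of 20 and the three-layer depth are illustrative assumptions of mine, not IBM’s actual figures):

```python
calls = {"native": 0}

def native_step(x):
    # one step on the real hardware
    calls["native"] += 1
    return x + 1

def emulate(step, overhead=20):
    # one emulated step costs `overhead` steps of the machine below it
    # (toy constant, chosen only for illustration)
    def run(x):
        for _ in range(overhead):
            x = step(x)
        return x
    return run

# three nested layers: simulated cores on a simulated
# architecture on the physical machine
stack = native_step
for _ in range(3):
    stack = emulate(stack)

stack(0)
print(calls["native"])  # 8000 = 20**3 native steps for one top-level step
```

With an overhead of 20 per layer, three layers already cost 20³ = 8000 underlying steps for every top-level step, which is the multiplicative penalty the comment is pointing at.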
They are trying to reach “True North” and believe they are headed in the right direction, but they do not know whether the compass they have built actually measures what they believe it measures. Nor do they know whether, once they get there, True North will do what they want it to do. They do not even know how the thing they are trying to replicate does what it does, yet they believe faster computers will make up for not knowing how actual minds arise out of actual brains, how those brains are constructed, or how the actual neurons of which real brains are built function in real life.
But they’re published. So… you know… there’s that.
If you cannot simulate roundworms, do not know how neurons actually work, and do not even know how memories are stored in natural brains, you are in no danger of building Colossus.
People are highly susceptible to magical thinking. When the telegraph was invented, people thought the mind was like the telegraph because… magic, that’s why. Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.
Strong AI is refuted because syntax is insufficient for semantics.
Where the heck does that come from? What do you mean by “strong AI is refuted”, “syntax is insufficient for semantics”, and how does the former follow from the latter?
The strong AI hypothesis is that consciousness is the software running on the hardware of the brain. Therefore one does not need to know or understand how brains actually work to construct a living, conscious mind. Thus any system that implements the right computer program with the right inputs and outputs has cognition in exactly the same literal sense that human beings have understanding, thought and memory. It was the belief of strong AI proponents such as Marvin Minsky at MIT and others that they were literally creating minds when writing their programs. They felt no need to stoop so low as to poke around in actual brains and get their hands dirty.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds. The strong AI hypothesis is false.
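For reference, the argument just stated has the standard premise-and-conclusion form of Searle’s derivation; the schema below is only a restatement of the sentences above:

```latex
\begin{itemize}
  \item[P1.] Programs are purely formal (syntactic).
  \item[P2.] Minds have semantic contents.
  \item[P3.] Syntax by itself is neither constitutive of nor
             sufficient for semantics.
  \item[C.]  Therefore, programs by themselves are neither
             constitutive of nor sufficient for minds.
\end{itemize}
```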
Which means that IBM is wasting time, energy and money. But… perhaps their efforts will result in spin-off technology, so not all is lost.
Computers are syntactical machines. The programs they execute are pure syntax and have no semantic content. Meaning is assigned; it is not intrinsic to symbolic logic. That is its strength. Since (1) programs are pure syntax and have no semantic content, (2) minds do have semantic content, and (3) syntax is neither sufficient for nor constitutive of semantics, it must follow that programs are not by themselves constitutive of, nor sufficient for, minds.
How would one determine whether a given device/system has this “semantic content”? What kind of evidence should one look at? Inner structure? Only inputs and outputs? Something else?
I second fubarobfusco. While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don’t count as semantic content, I don’t know what does.
So, I still don’t know what makes you so sure consciousness is impossible on an emulator. (Leaving aside the fact that using “strong AI” to talk about consciousness, instead of capabilities, is a bit strange.)
That is correct, you don’t know what semantic content is.
“I still don’t know what makes you so sure consciousness is impossible on an emulator.”
For the same reason that I know simulated fire will not burn anything. In order for us to create an artificial mind, which certainly must be possible, we must duplicate the causal relations that exist in real consciousnesses.
Let us imagine that you go to your doctor and he says, “Your heart is shot. We need to replace it. Lucky for you, we have a miniature supercomputer we can stick into your chest that can simulate the pumping action of a real heart down to the atomic level. Every atom, every material, every gasket of a real pump is precisely emulated to an arbitrary degree of accuracy.”
“Sign here.”
Do you sign the consent form?
Simulation is not duplication. In order to duplicate the causal effects of real-world processes, it is not enough to represent them in symbolic notation, which is all a program is. To duplicate the action of a lever on a mass, it is not enough to represent that action to yourself on paper or in a computer. You have to actually build a physical lever in the physical world.
In order to duplicate conscious minds, which certainly must be due to the activity of real brains, you must duplicate those causal relations that allow real brains to give rise to the real world physical phenomenon we call consciousness. A representation of a brain is no more a real brain than a representation of a pump will ever pump a single drop of fluid.
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine. It will be done by creating real-world objects that have the same causal functions that real-world neurons or other structures in real brains do.
the real world physical phenomenon we call consciousness
I don’t know what you mean by “physical” here. For any other “physical phenomenon” (light, heat, magnetism, momentum, etc.) I could imagine a device that measures or detects it. I have no idea how one would go about making a device that detects the presence of consciousness.
In fact, I don’t see anything “consciousness” has in common with light, heat, magnetism, friction etc. that warrants grouping them in the same category. It would be like having a category for “watersnail-eating fish, and Switzerland”.
While you could say programs are pure syntax, they are executed on real machines and have real effects. If those capabilities don’t count as semantic content, I don’t know what does.
That is correct, you don’t know what semantic content is.
Care to explain?
Meaning.
The words on this page mean things. They are intended to refer to other things.
Oh, and how do you know that?
Meaning is assigned; it is not intrinsic to symbolic logic.
Assigned by us, I suppose? Then what makes us so special?
Anyway, that’s not the most important point:
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine.
Of course not: von Neumann machines have limitations that would make them too slow. But is it impossible even in principle? I have a few questions for you:
Do you think it is impossible to build a simulation of the human brain on a von Neumann machine, accurate enough to predict the behaviour of an actual brain?
If it is possible, do you think it is impossible to link such a simulation to reality via an actual humanoid body? (The inputs would be the sensory system of the body, and the outputs would be the various actions performed by the body.)
If it is possible, do you think the result is conscious? Why not?
Strong AI is refuted because syntax is insufficient for semantics.
A wild Aristotelian Teleologist appears!
Phrasing claims in the passive voice to lend an air of authority is grating to the educated ear.
Aside from stylistic concerns, though, I believe you’re claiming that electronic circuits don’t really mean anything. However, I’m not sure whether you’re making the testable claim that no arrangement of electronic circuits will ever perform complicated cross-domain optimization better than a human, or the untestable claim that no electronic circuit will ever really be able to think.
When the telegraph was invented, people thought the mind was like the telegraph because… magic, that’s why.
Because the telegraph analogy is actually a pretty decent analogy.
Building more and faster wires and better telegraph stations and connecting them in advanced topologies will not change the fact that you are living in a fantasy world.
What makes you think a sufficiently large number of organized telegraph lines won’t act like a brain? Note that whether the number may be too large to actually fit on Earth is beside the point.
“Because the telegraph analogy is actually a pretty decent analogy.”
No it isn’t. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn’t analogous to F = ma; it IS F = ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or “wire” between them. Neurons can communicate without any synaptic connection between them (see “Neurons Talk Without Synapses”). Therefore the analogy is false.
“What makes you think a sufficiently large number of organized telegraph lines won’t act like a brain?”
Because that is an example of magical thinking. It is not based on a functional understanding of the phenomenon. “If I just pour more of chemical A into solution B I will get a bigger and better reaction.” We are strongly attracted to thinking like that. It’s probably why it took us thousands of years to really get how to do science properly.
No it isn’t. Constructing analogies is for poets and fiction writers. Science does not construct analogies. The force on an accelerating mass isn’t analogous to F = ma; it IS F = ma. If what you said were true, that neurons are like telegraph stations and their dendrites the wires, then it could not be true that neurons can communicate without a direct connection or “wire” between them. Neurons can communicate without any synaptic connection between them (see “Neurons Talk Without Synapses”). Therefore the analogy is false.
Science uses analogies all the time. For example, prior to the modern quantum mechanical model of the atom, one had a variety of other models which were essentially analogies. The fact that analogies break down in some respects shouldn’t be surprising: they are analogies, not exact copies.
It might be useful to give as an example an analogy that is closely connected to my own thesis work of counting Artin representations. It turns out that this is closely connected to the behavior of the units (that is, elements that have inverses) in certain rings. For example, we can make the ring denoted Z[2^(1/2)], which is formed by taking 1 and the square root of 2 and then taking all possible finite sums, differences and products of elements. Rings of this sort, where one takes all combinations of 1 with the square root of an integer, have been studied since the late 1700s. Now, it turns out that there are some not-so-obvious units in Z[2^(1/2)]. I claim that in this ring, 1 + 2^(1/2) is a unit.
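The unit claim can be verified with exact integer arithmetic: represent a + b·2^(1/2) by the pair (a, b), so that multiplication uses (2^(1/2))² = 2. A quick sketch (the pair encoding is just an illustrative convention of mine):

```python
def mul(x, y):
    # (a + b*sqrt(2)) * (c + d*sqrt(2)) = (ac + 2bd) + (ad + bc)*sqrt(2)
    a, b = x
    c, d = y
    return (a * c + 2 * b * d, a * d + b * c)

u = (1, 1)        # the element 1 + sqrt(2)
u_inv = (-1, 1)   # the element -1 + sqrt(2)
print(mul(u, u_inv))  # (1, 0), i.e. the ring element 1
```

Since the product is (1, 0), the ring element 1, the element 1 + 2^(1/2) has an inverse inside Z[2^(1/2)] and is therefore a unit.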
It turns out that if instead one forms a ring in the following way: take 1 and 1/p for some prime p, and then form all products, sums and differences, one gets a ring that behaves in many ways similarly to the quadratic rings, but is much easier to analyze. The analogy breaks down pretty badly in some respects, but in most ways it is pretty good, to the point where large classes of results in one setting translate into almost identical results in the other (although the proofs are often different and require much more machinery in the quadratic case). So here we have in math, often seen as one of the most rigorous of disciplines, an analogy that is not just occurring at a pedagogical level but is actively helpful for research.
It is not based on a functional understanding of the phenomenon. “If I just pour more of chemical A into solution B, I will get a bigger and better reaction.” We are strongly attracted to thinking like that. It’s probably why it took us thousands of years to really get how to do science properly.
You appear to be ignoring the bit where I noted “organized”. But actually, even without that your statement is wrong. Often we do get critical masses where behavior becomes different on a large scale. Indeed, the term “critical mass” occurs precisely because this occurs with enriched uranium or with plutonium. And there are many other examples. For example, shove enough hydrogen together and you get a star.