While you could say programs are pure syntax, they are executed on real machines and have real effects. If those effects don’t count as semantic content, I don’t know what does.
That is correct, you don’t know what semantic content is.
Care to explain?
Meaning.
The words on this page mean things. They are intended to refer to other things.
Oh. And how do you know that?
Meaning is assigned, it is not intrinsic to symbolic logic.
Assigned by us, I suppose? Then what makes us so special?
Anyway, that’s not the most important point:
None of this means we might not someday build an artificial brain that gives rise to an artificial conscious mind. But it won’t be done on a von Neumann machine.
Of course not: von Neumann machines have limitations that would make them too slow. But even in principle? I have a few questions for you:
Do you think it is impossible to build a simulation of the human brain on a von Neumann machine, accurate enough to predict the behaviour of an actual brain?
If it is possible, do you think it is impossible to link such a simulation to reality via an actual humanoid body? (The inputs would be the sensory system of the body, and the outputs would be the various actions performed by the body.)
If it is possible, do you think the result is conscious? Why not?
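To make the second question concrete, here is a rough sketch of the setup I have in mind, running on an ordinary von Neumann computer. Every name in it (BrainSimulation, Body, read_sensors, act) is a made-up placeholder, not any real neuroscience or robotics API; the point is only the shape of the loop.

```python
# Hypothetical sketch: a brain simulation on a von Neumann machine,
# wired to a humanoid body. All classes and methods are placeholders.

class BrainSimulation:
    """Stand-in for a simulation accurate enough to predict a real brain."""

    def step(self, sensory_input):
        # Advance the simulated brain by one time step and return
        # the motor commands it produces.
        raise NotImplementedError


class Body:
    """Stand-in for a humanoid body with sensors and actuators."""

    def read_sensors(self):
        # The body's sensory system: vision, hearing, touch, ...
        raise NotImplementedError

    def act(self, motor_commands):
        # The body's actions: movement, speech, ...
        raise NotImplementedError


def run(brain: BrainSimulation, body: Body) -> None:
    # The "link to reality": inputs come from the body's senses,
    # outputs drive the body's actions.
    while True:
        sensory_input = body.read_sensors()
        motor_commands = brain.step(sensory_input)
        body.act(motor_commands)
```

The linkage to reality is nothing more than this loop; the question is whether anything running it would be conscious.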