Perhaps our main difference is that you seem to believe in computationalism, while I don’t. I think consciousness is something fundamentally different from a computer program or any other kind of information. It’s experience, which is beyond information.
Draw a boundary around the part of your brain that apparently contains more than compute because it produces those sentences. This presumably excludes your visual cortex, your episodic memory, and some other parts. There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside. I expect you could produce that sense of having experience even if you didn’t have language to put it into words, so we should be able to pull your language cortex out of the boundary without pulling out anything but compute.
The outside only works in terms of information. It increasingly looks like you can shrink the boundary until you could replace the inside with a rock that says “I sure seem to be having experiences,” without any changes to what information crosses the boundary. Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.
“There are now machine models that can recognize faces with mere compute, so probably the part of you that suggests that a cloud looks like a face is also on the outside.”
Modern computers could theoretically do anything that a human does, except experience it. I can’t draw a line around the part of my brain responsible for it, because there probably is no single such part; it’s all of it. I’m no neurologist, but from the little I know, the brain has an integrated architecture.
Maybe in the future we could make conscious machines out of silicon (or whatever material), but I still maintain that the brain is not a Turing machine, or at least not only a Turing machine.
“The outside only works in terms of information.”
Could be. The mind processes information, but it is not information (this is an intuitive opinion, and so is yours).
“Whatever purpose evolution might have had for equipping us with such a sense, it seems easier for it to put in an illusion than to actually implement something that, to all appearances, isn’t made of atoms.”
Now we’ve arrived at my favorite part of the computationalist discourse: the claim, or suggestion, that consciousness is an illusion. I think that the only thing that can’t be an illusion is consciousness. The only thing that certainly exists is consciousness.
As for being made of atoms or not, well, information isn’t, either. But it’s expressed by atoms, and so is consciousness.
If one might make a conscious being out of silicon but not out of a Turing machine, what happens when you run the laws of physics on a Turing machine and have simulated humans arise for the same reason they did in our universe, humans who have conversations like ours?
“I think that the only thing that can’t be an illusion is consciousness. The only thing that certainly exists is consciousness.”
What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.
“if one might make a conscious being out of silicon but not out of a Turing machine”
I also doubt that btw.
“what happens when you run the laws of physics on a Turing machine and have simulated humans arise”
Is physics computable? That’s an open question.
And more importantly, there’s no guarantee that the laws of physics would necessarily generate conscious beings.
Even if they did, those beings could be p-zombies.
“What do you mean by “certainly exists”? One sure could subject someone to an illusion that he is not being subjected to an illusion.”
True. But as long as you have someone, it’s no longer an illusion. It’s like, if you stimulate your pleasure centers with an electrode, and you say “hmmm that feels good”, was the pleasure an illusion? No. It may have been physically an illusion, but not experientially, and the latter is what really matters. Experience is what really matters, or is at least enough to make something real. That consciousness exists is undeniable. “I think, therefore I am.” Experience is the basis of all fact.
Do you agree that there is a set of equations that precisely describes the universe? You can compute the solutions for any system of differential equations through an infinite series of ever finer approximations.
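To make “ever finer approximations” concrete, here is a minimal sketch of the idea (a toy example of my own, not anything tied to real physics): Euler’s method applied to dx/dt = x, whose exact value at t = 1 is e. The more steps you take, the closer the estimate gets.

```python
# Toy illustration: approximate x(1) for dx/dt = x with x(0) = 1.
# The exact answer is e ≈ 2.71828; finer steps give better approximations.

def euler_estimate(steps: int) -> float:
    """One Euler-method pass over [0, 1] using `steps` equal time steps."""
    x, dt = 1.0, 1.0 / steps
    for _ in range(steps):
        x += x * dt  # dx = x * dt, since dx/dt = x
    return x

# Each row uses ten times as many steps; the estimates converge on e.
for steps in (10, 100, 1_000, 10_000):
    print(f"{steps:>6} steps -> {euler_estimate(steps):.6f}")
```

Refining the step size forever is exactly that infinite series of ever finer approximations, and every step of it is ordinary computation.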
“there’s no guarantee that the laws of physics would necessarily generate conscious beings”
The Turing machine might calculate the entire tree of all timelines, including this conversation (a toy sketch of the kind of enumeration I mean is below). Do you suggest that there is some way of running a universe that only starts to make a difference once life gets far enough along, and without which the people in it would fail to talk about consciousness?
If we wrote out a complete log of that tree on a ludicrously large piece of paper, and then walked over to the portion of it that describes this conversation, I am not claiming that we should treat the transcript as something worth protecting. I’m claiming that whatever the characters in the transcript have, that’s all we have.
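The kind of enumeration I have in mind is nothing more exotic than this sketch (my own illustration, which assumes only that the laws can be modelled as a rule mapping each state to its possible successors; the two-way branching rule here is a placeholder, not a claim about physics):

```python
# Toy sketch: enumerate every "timeline" of a branching rule, level by level.
# The states and the rule are stand-ins; nothing hinges on what they actually are.

def successors(state: str) -> list[str]:
    """Hypothetical branching law: at each step the world goes 'L' or 'R'."""
    return [state + "L", state + "R"]

def timelines(depth: int) -> list[str]:
    """All histories of length `depth`, computed breadth-first from the start state."""
    level = [""]
    for _ in range(depth):
        level = [nxt for state in level for nxt in successors(state)]
    return level

print(timelines(3))  # the full tree after three steps: 8 distinct histories
```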
Still, that could all happen with philosophical zombies. A computer agent (an AI) doesn’t sleep and can function forever. Those two factors are what lead me to believe that computers, as we currently define them, won’t ever be alive, even if they come to emulate the world perfectly. At best they’ll produce p-zombies.