There is too much vagueness involved here. A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.
No doubt we don’t know much about intelligence and consciousness. Do we even know enough to be able to tell that the use of the term “consciousness” makes sense? I don’t know. But what I do know is that we know a lot about physics and biological evolution, and that we are physical and an effect of evolution.
We know a bit less about the relation between evolutionary processes and intelligence, but we do know that there is an important difference and that the latter can utilize the former.
Given all that we know, is it reasonable to doubt the possibility that we can create “minds”, conscious and intelligent agents? I don’t think so.
A better question would be whether there is any reason to believe that, even though evolution could create consciousness, we cannot.
Very good point! Even if consciousness does require something mysterious and metaphysical we don’t know about, if it’s harnessed within us (and robustly passes from parent to child over billions of births), we can harness it elsewhere.
I reject the claim that “consciousness is really just computation” if you define computation as the operation of contemporary computers rather than brains, but I wholeheartedly agree that we are physical and an effect of evolution, as is our subjective experience. I just don’t think that the mind/consciousness is solely the neural connections of one’s brain. Cell metabolism, whole-organism metabolism, and the environment of that organism also shape the conscious experience. If it’s reduced to a neural net, important factors will most certainly be lost.
Does this mean that amputees should be less conscious?

Maybe not with humans, but definitely for octopuses!
(More seriously: depending on how far you take embodied cognition, there may be some small loss. We know, for instance, that your gut bacteria influence your mood via the nerves to the gut; so there are connections. And once there are connections, it becomes much more plausible that cutting them may decrease consciousness. After a few weeks in a float tank, how conscious would you be? Not very...)
I’m pretty sure that you agree that none of this means that a human brain in a vat with proper connections to the environment, real or simulated, is inherently less conscious than one attached to a body.
I don’t take embodiment that far, no, but a simulated amputation in a simulation would seem as problematic as a real amputation in the real world, barring extraordinary intervention on the part of the simulation.
No, but subjective conscious experience would definitely change.

Well, that ought to be testable. If we upload a human and the source of consciousness is lost, they should stop feeling it. Provided they’re honest, we can just ask them.

That could very well be the case.

Well, you’re a p-zombie, you would say that.
Do we even know enough to be able to tell that the use of the term “consciousness” makes sense? I don’t know.
Is there a better word than “consciousness” for the explanation for why (I think I) say “I see red” and “I am conscious”? I do (think I) claim those things, so there is a causal explanation.
I think any word would be better than “consciousness”! :) It really is a very confusing term, since it is often used (vaguely) to refer to quite different concepts.
Cognitive scientists often use it to mean something similar to “attention” or as the opposite of “unconscious”. This is an “implementation level” view—it refers to certain mechanisms used by the brain to process information.
Then there is what Ned Block calls “access consciousness”, “the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior” (to quote Wikipedia). This is a “functional specification level” view: consciousness is correctly implemented if it lets you accurately describe the world around you or the state of your own mind.
Then finally there’s “phenomenal consciousness” or qualia or whatever you want to call it—the mystical secret sauce.
No doubt these are all interrelated in complicated ways, but it certainly does not help matters to use terminology which further blurs the distinctions. Especially since they are not equally mysterious: the actual implementation in the brain will take a long time to figure out, and as for qualia it’s hard to say even what a successful answer would look like. But at the functional specification level, it seems quite easy to give a (teleological) explanation. That is, it’s easy to see that an agent benefits from being able to represent the world (and be able to say “I see a red thing”) and to reason about itself (“each time I see a red thing I feel hungry”). So it’s not very mysterious that we have mental concepts for “what I’m currently feeling”, etc.
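(To make the functional-level point concrete, here is a toy Python sketch of an agent whose state is “access conscious” in roughly Block’s sense: the same information supports verbal report, reasoning about itself, and the control of behavior. Every name in it is made up, and of course nothing about it is claimed to be conscious in any deeper sense.)

```python
# Toy sketch (all names hypothetical) of "access consciousness" as a
# functional specification: information in the agent's state is available
# for verbal report, for reasoning about itself, and for controlling
# behavior. Nothing here is claimed to be conscious in any other sense.

class Agent:
    def __init__(self):
        self.percepts = []   # representation of the world ("I see ...")
        self.feelings = []   # representation of its own state ("I feel ...")

    def perceive(self, thing):
        self.percepts.append(thing)
        if thing == "red thing":      # hard-wired association, for the toy
            self.feelings.append("hungry")

    def report(self):
        # Verbal report: the very state that drives behavior is reportable.
        return ([f"I see a {p}" for p in self.percepts]
                + [f"I feel {f}" for f in self.feelings])

    def introspect(self):
        # Reasoning about itself: noticing a regularity in its own states.
        if "red thing" in self.percepts and "hungry" in self.feelings:
            return "Each time I see a red thing I feel hungry."
        return "No pattern noticed yet."

agent = Agent()
agent.perceive("red thing")
print(agent.report())      # ['I see a red thing', 'I feel hungry']
print(agent.introspect())  # Each time I see a red thing I feel hungry.
```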