I would certainly rate it as plausible that we could create beings that are nearly perfectly conscious from a linguistic perspective.
I’m not really sure what you mean by “conscious from a linguistic perspective,” and judging from your responses to other comments I infer that neither are you.
So let me try some simpler questions: is Watson conscious from a Jeopardy-game-playing perspective? Is a system that can perfectly navigate a maze conscious from a maze-navigation perspective? Is an adding machine conscious from an adding-numbers-together perspective?
If you answer “no” to any of those, then I don’t know what you mean well enough to proceed, but it might help if you explain why not.
If your answers are “yes” to all of those, then I guess I agree that it’s logically possible for a system to be (in your terms) conscious from a linguistic perspective without necessarily being something I would consider near-human intelligence, but I’m very skeptical of our ability to actually build such a system, and I’m not too worried about the prospect.
By “conscious from a linguistic perspective”, I mean that we cannot distinguish it from a conscious being purely through linguistic interactions (i.e., Turing tests). I probably should have said “conscious from a linguistic (text-based) perspective” to be more precise.
OK, fair enough. The second response applies.
Said differently: I expect that the set of cognitive activities required to support linguistic behavior indistinguishable from, say, my own (supposing here that I’m a conscious being), across a sufficiently broad range of linguistic interactions, correlates highly enough with any other measure of “this is a conscious being” I might care to use that any decision procedure which files a system capable of such behavior as “nonconscious” will also file conscious beings that way.