Well, of course it’s fuzzy, since it’s a human natural-language definition (fuzzy here meaning overloaded with several meanings, since we tend to encounter them all at once), and you can fulfill some parts but not others. But a rock is definitely not conscious: it doesn’t have a mental object that it labels as itself and that includes the system doing its computations, it can’t examine or manipulate its thoughts, it isn’t roughly modelable as a human (not the most egalitarian part of the definition, but it’s there, I think), and all that good stuff.
A cat probably does have such a mental object, probably can’t examine its own thoughts, and is roughly modelable as a human. So it sits in the fuzzy zone, where some people can go “look, it’s conscious — it clearly exhibits the social responses of a living thing, like pain and love, therefore it’s conscious,” and other people go “But cats can’t do complicated things with their own thoughts, because they’re not capable of representing things with language, so they’re not conscious.” It’s the Standard Definitional Dispute, caused not because consciousness is undefined but because it’s overdefined.
To make an AI that humans accepted as conscious, you would probably have to get past the fuzzy zone and fulfill lots of the properties, maybe even making it roughly modelable as a human to satisfy people (emotion, a little selfishness, those sorts of things). It would understand language, be able to examine and manipulate its own thoughts, have a “me” mental object, and generally be built with a human-like mental structure.
So yeah, I can’t measure consciousness, but I can measure the sub-definitions I know about and then say “this fulfills most of the conditions” or “this fulfills none of the conditions.”
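To make the “measure the sub-definitions” idea concrete, here’s a minimal toy sketch. The property list, the cat’s judgments, and the scoring function are all assumptions made up for illustration, not a real measurement procedure:

```python
# Toy sketch: treat "consciousness" as a bundle of sub-definitions and
# report how many a given system fulfills, rather than a single yes/no.
# The property names and the judgments below are illustrative assumptions.

SUB_DEFINITIONS = [
    "has a 'me' mental object that includes its own computing system",
    "can examine and manipulate its own thoughts",
    "is roughly modelable as a human",
    "uses language to represent things",
]

def fulfilled_conditions(judgments: dict) -> list:
    """Return the sub-definitions the system is judged to fulfill."""
    return [prop for prop in SUB_DEFINITIONS if judgments.get(prop, False)]

# Hypothetical judgments for a cat, matching the discussion above.
cat = {
    "has a 'me' mental object that includes its own computing system": True,
    "can examine and manipulate its own thoughts": False,
    "is roughly modelable as a human": True,
    "uses language to represent things": False,
}

met = fulfilled_conditions(cat)
print(f"Fulfills {len(met)} of {len(SUB_DEFINITIONS)} conditions: {met}")
```

The point isn’t the code, it’s the framing: you end up with “fulfills 2 of 4 conditions” rather than a binary verdict, which is exactly where the fuzzy-zone disagreements live.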
EDIT: Also, I should note that actually building an AI with human emotion equivalents sounds like a bad idea, at least without a bigger AI to keep things safe.