We can’t “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity yourself and therefore knew, from how you had put the simulation together, that the simulated entity experienced self-consciousness.
So how does this hypothetical biologist calibrate his consciousness scanner? Calibrate it so that he “knows for sure” it is reading consciousness correctly? His degree of certainty in the scanner’s output is limited by his degree of certainty in his calibration standards, even if the scanner itself worked perfectly.
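To make that limit concrete, here is a minimal sketch in Python. The numbers and the independence assumption are mine, purely for illustration: trusting a positive verdict requires both that the calibration standard really was conscious and that the scanner reads correctly, so even a scanner that never misreads can only be trusted as far as the standard it was calibrated against.

```python
# A minimal sketch of the calibration limit, with made-up numbers.
# Assumption: trusting a positive verdict requires BOTH that the
# calibration standard really was conscious AND that the scanner reads
# correctly; treating these as independent gives a simple upper bound.

def certainty_in_verdict(p_standard_is_conscious: float,
                         p_scanner_reads_correctly: float = 1.0) -> float:
    """Upper bound on confidence in the scanner's verdict: it can never
    exceed confidence in the standard it was calibrated against."""
    return p_standard_is_conscious * p_scanner_reads_correctly

# Even a PERFECT scanner, calibrated against a standard we are only
# 90% sure is conscious, yields at most 90% certainty:
print(certainty_in_verdict(0.90))        # 0.9
# An imperfect scanner only lowers the bound further:
print(certainty_in_verdict(0.90, 0.99))  # 0.891
```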
In order to be aware of something, you need to detect something. To detect something you need to receive sensory data and then process that data, via pattern recognition, into a detection or a non-detection.
To detect consciousness your hypothetical biologist needs a “consciousness scanner”. So does any would-be detector of any consciousness. That “consciousness scanner” has to have certain properties whether it is instantiated in electronics or in meat. Those properties include receipt of sufficient data and then pattern recognition on that data to determine a detection or a non-detection. That pattern recognition will be subject to Type I errors (false positives) and Type II errors (false negatives).
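Here is a toy model of that claim, again in Python with made-up error rates: any detector that turns data into a yes/no verdict exposes both failure modes, whatever it is built from.

```python
import random

# Illustrative error rates; these are assumptions, not measurements.
FALSE_POSITIVE_RATE = 0.05  # Type I: "detects" consciousness that isn't there
FALSE_NEGATIVE_RATE = 0.10  # Type II: misses consciousness that is there

def scan(truly_conscious: bool) -> bool:
    """Return the detector's yes/no verdict, corrupted by both error types."""
    if truly_conscious:
        return random.random() > FALSE_NEGATIVE_RATE
    return random.random() < FALSE_POSITIVE_RATE

# Run many scans to watch both failure modes show up in practice.
trials = 10_000
type1 = sum(scan(False) for _ in range(trials)) / trials
type2 = sum(not scan(True) for _ in range(trials)) / trials
print(f"observed Type I (false positive) rate:  {type1:.3f}")
print(f"observed Type II (false negative) rate: {type2:.3f}")
```

The point of the sketch is only that the two error rates are structural: they come from the pattern-recognition step itself, not from whether the detector is electronic or biological.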
A machine is an entirely different kind of being from an animal. It doesn’t need to search for food, it doesn’t have sex, it doesn’t have to fight to survive, and so on.
Humans are good at pattern recognition but bad at arithmetic. With computers it’s the other way around. Supposing computers ever become fast enough to match our capabilities, they will not suddenly become bad at math, like us. They will be even better at it!
So, because we are vastly different, there is no reason to assume that machines are ever going to experience the world the way we do. We can program them to act that way, but then you just end up with a machine ‘pretending’ to be conscious.
I’m not saying machines can’t be conscious, just that their consciousness will be (or already is) entirely different from ours. They can only measure it against their own unique standards; it is pointless to measure it against ours.