EDIT: I realise that you asked us to be gentle, and all I’ve done is point out flaws. Feel free to ignore me.
You explore many interesting ideas, but none of them is backed up with enough evidence to be convincing. I doubt that anything you’ve said is correct. The first example is this statement:
Because the experience of consciousness is subjective, we can never “know for sure” that an entity is actually experiencing consciousness.
How do you know?
What if tomorrow a biologist worked out what causes consciousness and created a simple scan for it? What evidence do you have now that would make you surprised if this happened?
First, an entity must have a “self detector”: a pattern-recognition structure it uses to recognize its own state of being an entity, and of being the same entity over time.
Why? What actually makes it impossible to have a conscious entity (one that has qualia) that is not self-aware (one that knows some things about itself)?
We can’t “know for sure” because consciousness is a subjective experience. The only way you could “know for sure” would be if you simulated an entity and so knew from how you put the simulation together that the entity you were simulating did experience self-consciousness.
So how does this hypothetical biologist calibrate his consciousness scanner, so that he “knows for sure” it is reading consciousness correctly? His degree of certainty in the scanner’s output is limited by his degree of certainty in his calibration standards, even if the scanner itself worked perfectly.
In order to be aware of something, you need to detect something. To detect something, you need to receive sensory data and then process that data, via pattern recognition, into a detection or a non-detection.
To detect consciousness, your hypothetical biologist needs a “consciousness scanner”. So does any would-be detector of any consciousness. That “consciousness scanner” has to have certain properties whether it is instantiated in electronics or in meat. Those properties include receipt of sufficient data, and then pattern recognition on that data to determine a detection or a non-detection. That pattern recognition will be subject to Type 1 errors (false positives) and Type 2 errors (false negatives).
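The point about error rates can be made concrete with a toy simulation. This is only a sketch: the detector, its error rates, and the trial count are all invented for illustration, not a claim about any real scanner.

```python
import random

random.seed(0)

def noisy_detector(truly_conscious, false_pos=0.05, false_neg=0.10):
    """A hypothetical 'consciousness scanner': a binary classifier
    that misfires at fixed rates (all parameters are invented)."""
    if truly_conscious:
        # Misses a real positive with probability false_neg (Type 2 error).
        return random.random() >= false_neg
    # Fires on a true negative with probability false_pos (Type 1 error).
    return random.random() < false_pos

trials = 100_000
# Empirical Type 1 rate: detections among entities that are not conscious.
type1 = sum(noisy_detector(False) for _ in range(trials)) / trials
# Empirical Type 2 rate: misses among entities that are conscious.
type2 = sum(not noisy_detector(True) for _ in range(trials)) / trials
print(f"Type 1 rate ~ {type1:.3f}, Type 2 rate ~ {type2:.3f}")
```

However the detector is built, meat or silicon, its verdicts are only as trustworthy as these two error rates, and estimating them requires a calibration standard, which is exactly the circularity raised above.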
A machine is an entirely different kind of being from an animal. It doesn’t need to search for food, it doesn’t have sex, it doesn’t have to fight to survive, etc.
Humans are good at pattern recognition but bad at arithmetic. With computers it’s the other way around. Supposing computers ever become fast enough to match our capabilities, they will not suddenly become bad at math, like us; they will be even better at it!
So, because we are vastly different, there is no reason to assume they are ever going to experience the world the way we do. We can program them to act that way, but then you just end up with a machine ‘pretending’ to be conscious.
I’m not saying machines can’t be conscious, just that their consciousness will be (or already is) entirely different from ours. They can only measure it against their own unique standards; it’s pointless to measure it against ours.
Recommended reading: http://lesswrong.com/lw/jl/what_is_evidence/