One hypothesis is that consciousness evolved in the service of deception. Kevin Simler and Robin Hanson’s “The Elephant in the Brain” is a decent read on this, although it does not address the Hard Problem of Consciousness.
If that’s the case, we might circumvent deception’s usefulness by choosing the right goals, or by having strong enough detection and punishment of norm violations. For example, if we build closely monitored factories in which faulty machines are repaired or destroyed, and our goal is total output rather than the survival of individual machines, then deception gains the machines nothing toward that goal.
If the easy and hard problems of consciousness somehow come apart (i.e., systems that don’t functionally resemble the conscious parts of human brains nonetheless end up “having experience” or “having moral weight”), then this might not solve the problem even under the deception hypothesis.