Without a clear definition and measure of “consciousness”, it’s almost impossible to reason about tradeoffs and utility. But that won’t stop us!
This is the first time I’ve come across the point:
Consciousness is presumably cheap and useful for getting something done, since we evolved to have it.
But I’m not sure that the “something” that it’s useful for getting done is actually what other conscious entities want.
One hypothesis is that consciousness evolved for the purpose of deception—Robin Hanson’s “The Elephant in the Brain” is a decent read on this, although it does not address the Hard Problem of Consciousness.
If that’s the case, we might circumvent its usefulness by having the right goals, or by strong enough detection and norm-punishing behaviors. If we build closely monitored factories in which faulty machines are repaired or destroyed, and our goal is output rather than the survival of individual machines, then deception gains the machines nothing toward that goal.
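A toy way to see this selection argument: the sketch below (my own illustration, not from the source; `detect_p`, `REAL`, and `BLUFF` are made-up parameters) evolves a population of machines in which deception inflates *apparent* output unless monitoring catches it, in which case the machine is destroyed. When selection acts on apparent output with no monitoring, deception spreads; with strict monitoring and output-based selection, it dies out.

```python
import random

random.seed(42)

REAL = 1.0   # real output every machine produces
BLUFF = 0.5  # extra *apparent* output a deceptive machine fakes

def fitness(deceptive, detect_p):
    """Apparent output used for selection. Detected deceivers are
    destroyed (fitness 0); undetected ones merely look more productive."""
    if not deceptive:
        return REAL
    if random.random() < detect_p:
        return 0.0           # caught by monitoring: repaired/destroyed
    return REAL + BLUFF      # bluff succeeds

def evolve(detect_p, generations=200, size=1000):
    # Start with half the machines deceptive, half honest.
    pop = [random.random() < 0.5 for _ in range(size)]
    for _ in range(generations):
        weights = [fitness(d, detect_p) for d in pop]
        # Next generation sampled in proportion to apparent output.
        pop = random.choices(pop, weights=weights, k=size)
    return sum(pop) / size   # final fraction of deceptive machines

print("no monitoring:    ", evolve(detect_p=0.0))  # deception takes over
print("strict monitoring:", evolve(detect_p=0.9))  # deception is selected out
```

With `detect_p=0.9` a deceiver's expected fitness is 0.1 × 1.5 = 0.15, well below the honest machine's 1.0, so the deceptive trait is driven out even though undetected bluffing still pays off locally.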
If the easy and hard versions of consciousness somehow come apart (i.e., things that don’t functionally resemble the conscious parts of human brains still end up “having experience” or “having moral weight”), then even under the deception hypothesis this might not solve the problem.