What about “everything that can be destroyed by the truth should be”? There might be an inconsistency between saying maximally true things and not offending people. What is the priority on LW?
On a somewhat related note, I can see it already. You spend years carefully programming your AI, calculating its friendliness, making sure it is perfectly Bayesian and perfectly honest. You are finally done. You turn it on, and the first line it prints is: "Oh dear, you are quite ugly."
What about “everything that can be destroyed by the truth should be”?
That statement obviously only applies when there is falsehood to be destroyed. I'm sure that guy knows he is not especially pretty. Telling him he's ugly may be truthful, but it's also kind of like yelling "You're really hot" at the sun.