If the delusion is one that all of us share, we won’t be able to find it without building an AI.
You’re not understanding (or not believing) the power of such denial/delusion. If there’s a delusion that universal and compelling, we won’t be able to find it EVEN IF we build an AI.
I didn’t comment on Eliezer’s post because it was equally misguided—if you’re so committed to a belief that you ignore a ton of “normal” evidence, you’re not going to be convinced by an AI just because you read its source code. That’s “just” evidence like everything else, and you can always find rationalizations: misunderstood terms, hardware error, or that the generalizations don’t apply to you.
http://lesswrong.com/lw/xc/the_uses_of_fun_theory/