They just bicker endlessly about uncertainty: "Can you really know that 1+1=2?"
I agree with you that I don’t think an AGI would have the same problems humans have with the concept of truth. However, what you described matches neither the issues philosophers raise nor the sorts of big-universe issues the AI might get stuck on.
But wouldn’t that actually support my approach? Assuming that there really is something important that all of humanity misses but the AI understands:
- If you hardcode the AI’s optimal goal based on human deliberations, you are guaranteed to miss this important thing.
- If you use the method I suggested, the AI will, driven by the desire to speak the truth, try to explain the problem to the humans, who will in turn tell the AI what they think of that.
I don’t see how that’s relevant to philosophical questions about truth. Did you mean to reply to my other comment?