Which brings me to my question: do we program an AI to tell us only the truth, or to lie when it believes (with high certainty) that the lie will lead us to a net benefit over our expected lifetime?
This question is too anthropomorphic. Since you clearly mean a strong AI, it's no longer a question-answering machine; it's an engine for a supervised universe. At that point you ought to be talking about the organization of the future, along the lines of fun theory, for example.