What makes you think there are any such ‘answers’, renderable in a form that you could identify?
And even if they do exist, why do you think a human being could fully grasp the explanation in finite time?
Edit: It seems quite possible that even the simplest such ‘answers’ could require many years of full-time effort to understand, putting them beyond most if not all human memory capacity; i.e., by the end, even those who ‘learned’ the answer will have forgotten many parts from the beginning.
(Upvoted since your questions seem reasonable and I’m not sure why you got downvoted.)
I see two ways to achieve some justifiable confidence in philosophical answers produced by superintelligent AI:
1. Solve metaphilosophy well enough that we achieve an understanding of philosophical reasoning on par with mathematical reasoning, and have ideas/systems analogous to formal proofs and mechanical proof checkers that we can use to check the ASI’s arguments.
2. Increase our own intelligence and philosophical competence until we can verify the ASI’s reasoning ourselves.