(Upvoted since your questions seem reasonable and I’m not sure why you got downvoted.)
I see two ways to achieve some justifiable confidence in philosophical answers produced by superintelligent AI:
1. Solve metaphilosophy well enough that we achieve an understanding of philosophical reasoning on par with our understanding of mathematical reasoning, and have ideas/systems analogous to formal proofs and mechanical proof checkers that we can use to check the ASI's arguments.
2. Increase our own intelligence and philosophical competence until we can verify the ASI's reasoning ourselves.