For the physical world, I think there is a decent-sized space of “problems where we could ask an AGI questions, and good answers would be highly valuable, while betrayals would only waste a few resources”.
I agree that would be highly valuable from our current perspective (even though it is extremely low-value compared to what a Friendly AI could do, since such an AI could only select among courses of action that humans had already thought of, and humans are the ones who would have to carry them out).
So such an AI won’t kill us by giving us that advice, but it will kill us in other ways.
(Also, the screen itself will have to be restricted to displaying only the number; otherwise the AI can say something else and talk itself out of the box.)
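To make that restriction concrete, here is a minimal sketch of what a numeric-only output gate might look like. This is purely illustrative: the function name, the regex, and the whole setup are my assumptions, not anything proposed in the discussion itself.

```python
import re

# Hypothetical sketch of a numeric-only output gate for a boxed AI:
# anything that is not a single bare number is suppressed, so the AI
# cannot smuggle persuasive text through the display channel.
NUMBER_ONLY = re.compile(r"^\s*-?\d+(\.\d+)?\s*$")

def display_oracle_answer(raw_output: str) -> str:
    """Pass the AI's answer to the screen only if it is a bare number."""
    if NUMBER_ONLY.fullmatch(raw_output):
        return raw_output.strip()
    # Non-numeric output is dropped rather than shown to the operator.
    return "<output suppressed: non-numeric>"

print(display_oracle_answer("42"))          # shown: 42
print(display_oracle_answer("42, but..."))  # suppressed
```

The design point is that the gate whitelists one tiny output format rather than trying to blacklist persuasive content, since detecting persuasion is exactly the contest the boxed AI is presumed to win.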