If you’re talking about an AI general enough to answer interesting questions, something that doesn’t just recite knowledge from a database but can actually solve problems by using and synthesizing information in novel ways (which I assume you are, if you’re talking about preventing it from turning the Earth into a supercollider by putting limits on its resource usage), then you would also need to solve the additional problem of constraining what questions it’s allowed to answer.
Nitpick: to some extent we already have weak AI that can, within very narrow knowledge bases, answer interesting novel questions. For example, the Robbins conjecture was proven with the assistance of an automated theorem prover. And Simon Colton built AIs that were able to make new, interesting mathematical definitions and form conjectures about them (see this paper). There’s been similar work in biochemistry. So even very weak AIs can not only answer interesting questions but come up with new questions themselves.
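(For concreteness, since the comment only names the result: the Robbins conjecture asked whether every algebra satisfying commutativity, associativity, and the Robbins equation is a Boolean algebra; McCune’s EQP prover settled it affirmatively in 1996. A minimal sketch of the axioms, just to show what the prover actually had to work with:)

```latex
% The three Robbins axioms: a set with a binary operation \lor and a unary
% operation \lnot satisfying these equations is a Robbins algebra. The
% conjecture was that every Robbins algebra is a Boolean algebra.
\begin{align}
  x \lor y &= y \lor x
    && \text{(commutativity)} \\
  (x \lor y) \lor z &= x \lor (y \lor z)
    && \text{(associativity)} \\
  \lnot\bigl(\lnot(x \lor y) \lor \lnot(x \lor \lnot y)\bigr) &= x
    && \text{(Robbins equation)}
\end{align}
```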