[Question] What’s the problem with Oracular AIs?

I have a superintelligent AI in a box, and I instruct it via text to explain to me how to create a perfect battery. Its goal is not to maximize the number of perfect batteries or to make me personally understand a perfect battery; its goal is to produce a string that will explain, to anyone who reads it, how to create a perfect battery. Its only output is that one string, due in one week. How do I die?
