From the FAI Wikipedia page:

One example Yudkowsky provides is that of an AI initially designed to solve the Riemann hypothesis, which, upon being upgraded or upgrading itself with superhuman intelligence, tries to develop molecular nanotechnology because it wants to convert all matter in the Solar System into computing material to solve the problem, killing the humans who asked the question.
Cousin_it’s approach may be enough to avoid that.