Most human problems could be solved by humans or by slightly-above-human AIs. Which practically useful tasks (ones we could define now) require a very high level of superintelligence? I see two main such tasks:
1) Prevention of the creation of other potentially dangerous superintelligences.
2) Solving the task of indefinite life extension, which includes solving aging and uploading.
I could imagine other tasks, like near-light-speed space travel, but they are neither urgent nor necessary.
For which other tasks do we need superintelligence?
The difference between “slightly above human” and “very high level of superintelligence” is difficult to pin down, because we don’t have a good way to quantify intelligence, nor a good way to predict how much intelligence is needed to achieve something. That said, some plausible candidates (in addition to the two you mentioned, which are reasonable) are:
- Solving all other X-risks
- Constructing a Dyson sphere or something else that allows much more efficient, large-scale conversion of physical resources into human flourishing
- Solving all problems of society/government/economics, except to the extent we want to solve them ourselves
- Creating a way of life for everyone that is neither oppressive (like having to work a boring and/or unpleasant job) nor dull or meaningless
- Finding the optimal way to avert a Malthusian catastrophe while satisfying human preferences for reproduction and immortality
- Allowing us to modify/improve our own minds and those of our descendants, and/or create entirely new kinds of minds, while protecting us from losing our values and identities or unintentionally triggering a moral catastrophe
- Solving all moral conundrums involving animals, wild nature, and other non-human minds, if such exist
- Negotiating with aliens, if such exist (though that is probably very non-urgent)
Regarding near-light-speed space travel (and space colonization), it does seem necessary if you want to make the best use of the universe.
Also, I think Gurkenglas has a very good point regarding acausal trade.
Not even a superintelligent AI, but an Alpha*-level AI could do a lot of good now, if it learned to understand humans without falling prey to human biases. For example, an AI friend who knows just the right words to say in a given situation, never loses patience, and never has an agenda of its own would make the world a much better place almost instantly.
Producing, for any party at all, a strategic advantage decisive enough to safely disarm the threat of nuclear war.
Acausal trade on even footing with distant superintelligences.
If our physics happens to allow an easy way to destroy the world, then, the way we do science, someone will think of it, someone will talk, and someone will try it. If one superintelligent polymath did our research instead, we wouldn’t automatically lose if some configuration of magnets, copper, and glass could ignite the atmosphere.