If a superintelligence is worse than a human at permanently disabling itself—given that as the only required task—then there is a task that it is subhuman at and therefore not a superintelligence.
I suppose you could make some modifications to your definition to take account of this. But in any case, I think it’s not a great definition, as it makes an implicit assumption about the structure of problems (namely, that problems have a single “scalar” difficulty).
No, it can disable itself.
But that is not a solution; it is a counterproductive action. It makes things worse.
(In some sense, it has an obligation not to irreversibly disable itself.)