> By your own definition of “superintelligence”, it must be better at “being impotent” than any group of fewer than 10 billion humans. So it must be super-good at being impotent and doing very little, if that is required.
Being impotent is not a property one can be “good” at. It is not something one aims for.
It’s just a limitation. One usually does not self-impose it (with rare exceptions), although one might want to impose it on adversaries.
“Being impotent” is always worse. One can’t be “better at it”.
One can be better at refraining from exercising the capability (we have a different branch in this discussion for that).
> If that is what is needed then it must (by definition) be better at it
Not if it is disabling.
If it is disabling, then one has a self-contradictory situation (if an ASI fundamentally disables itself, it stops being more capable, stops being an ASI, and can’t keep exercising its superiority; it’s the same as if it self-destructs).
> If a superintelligence is worse than a human at permanently disabling itself (given that as the only required task), then there is a task at which it is subhuman, and it is therefore not a superintelligence.
> I suppose you could make some modifications to your definition to take account of this. But in any case, I think it’s not a great definition, as it makes an implicit assumption about the structure of problems (that, basically, problems have a single “scalar” difficulty).
No, it can disable itself.
But it is not a solution; it is a counterproductive action. It makes things worse.
(In some sense, it has an obligation not to irreversibly disable itself.)
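To make the structure of the disagreement explicit, here is a minimal formal sketch of the definition being debated. The notation ($T$ for the set of tasks, $H$ for the set of humans, $\mathrm{perf}(x,t)$ for how well $x$ performs task $t$) is assumed for illustration and does not come from the exchange itself.

```latex
% Sketch of the definition under discussion (notation assumed, not the participants' own).
% perf(x, t): performance of an agent or group x on task t, one scalar value per task.
\[
  \mathrm{SI}(x) \;\iff\;
  \forall t \in T \;\; \forall G \subseteq H \;
  \bigl( |G| < 10^{10} \;\Rightarrow\; \mathrm{perf}(x, t) \ge \mathrm{perf}(G, t) \bigr)
\]
% Objection: take t_d = "permanently disable yourself". If perf(x, t_d) < perf({h}, t_d)
% for even one human h, then SI(x) fails, because {h} is a group of fewer than 10^10 humans.
% Counter-point: succeeding at t_d destroys the very capabilities SI(x) quantifies over,
% so the definition implicitly assumes every task has a single scalar difficulty and that
% excelling at it is compatible with remaining capable afterwards.
```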