The question of whether a human-level AGI safety plan is workable is separate from the question of whether ASI risk is present. Many AGI safety plans are not impossibly watertight and instead rely on the AGI not being superintelligent, so the distinction is crucial when evaluating such plans. There is also some skepticism that an ASI could appear suddenly, in which case the assumption that AGIs remain approximately human-level becomes implicit rather than imposed by necessity.
The plans for dealing with ASI risk are separate: they go through the successful building of safe human-level AGIs, which are supposed to be the keystone for solving the rest of the problems in the nick of time (or gradually, for those who don’t expect superintelligence to emerge quickly after AGI). ASI risk then concerns the reliability of the second kind of plan, employing safe human-level AGIs, rather than the first kind, building them.