> That variants of this approach are nonetheless of use to sub-superintelligence AI safety: 70%.
Yeah, that sounds reasonable, possibly even slightly too pessimistic.
> That variants of this approach are of use to superintelligent AI safety: 40%.
Assuming that superintelligent language-model-alikes are actually inherently dangerous, I'd be far less optimistic; the obvious failure mode would be bargaining between the superintelligent AIs.