trust in humans over AI persists in many domains for a long time after ASI is achieved.
it may be that we’re just using the term superintelligence to mark different points, but if you mean strong superintelligence, the kind that could—after just being instantiated on earth, with no extra resources or help—find a route to transforming the sun if it wanted to: then i disagree for the reasons/background beliefs here.[1]
a value-aligned superintelligence directly creates utopia. an “intent-aligned” or otherwise non-agentic truthful superintelligence, if such a thing were created, would most usefully be asked to directly tell you how to create a value-aligned agentic superintelligence.
the relevant quote: