A paperclip maximizer can create self-reproducing paperclip makers.
It’s quite imaginable that somewhere in the universe there are organisms which either resemble paperclips (say, an intelligent gastropod with a paperclip-shaped shell) or which have a fundamental use for paperclip-like artefacts (they lay their eggs in hardened tunnels dug in a paperclip shape). So while it is outlandish to imagine that the first AGI made by human beings would end up fetishizing an object which, in our context, is a useful but minor artefact, what we would call a “paperclip maximizer” might be far more likely to arise from such a species, as a degenerate expression of some of its basic impulses.
The real question is how likely that is, or indeed how likely any scenario is in which superintelligence is employed to convert as much of the universe as possible into “X”, remembering that “interstellar civilizations populated by beings experiencing growth, choice, and joy” is also a possible value of X.
It would seem that universe-converting X-maximizers are a somewhat likely, but not inevitable, outcome of a naturally intelligent species undergoing a technological singularity. But we don’t know how likely that is, and we don’t know which values of X are likely.