I agree that superintelligent AIs running around would be bad and could do damage, but why would the near-term risk posed by this particular development itself seem like cause for updating in any particular direction? Your post makes it sound like you are frightened by AutoGPT in particular, and I don’t understand why.
I’m not frightened by AutoGPT in particular. It’s the coming improvements to that approach that frighten me. But I’m even more encouraged by the alignment upsides if this becomes the dominant approach.