
nsage

Karma: 6

I believe the only answer to the question “how would humans much smarter than us solve the alignment problem?” is this: they would simply make themselves smarter; if they built AGI at all, they would ensure it remained far less intelligent than they are.

Hence, the problem is avoided by following one maxim: always be smarter than the things you build.

Have frontier AI systems surpassed the self-replicating red line?

nsage · 11 Jan 2025 5:31 UTC
4 points
0 comments · 4 min read · LW link