Makes me think: wouldn't it be advisable, instead of heading straight for a (risky) AGI, to work on (safe) SIs first and then have them solve the problem of Friendly AGI?