I am not an AI safety researcher; more of a terrified spectator monitoring LessWrong for updates about the existential risk of unaligned AGI (thanks a bunch, HPMOR). That said, if it were a year away, I would jump into action. My initial thought would be to put almost all my net worth into a public awareness campaign. If we could cause enough trepidation in the general public, it's possible we could delay the emergence of AGI by a few weeks or months. My goal would not be to solve alignment, but rather to prod AI researchers into implementing basic safety measures that might reduce S-risk by 1 or 2 percent. Then… think deeply about whether I want to be alive for the most Interesting Time in human history.