Former OpenAI Superalignment Researcher: Superintelligence by 2030

The AGI race has begun. We are building machines that can think and reason. By 2025/26, these machines will outpace many college graduates. By the end of the decade, they will be smarter than you or I; we will have superintelligence, in the true sense of the word.

In the linked essay, Leopold Aschenbrenner explains why he believes AGI is likely to arrive within the decade, with superintelligence following soon after. He does so in some detail; the website is well-organized, but the raw PDF runs over 150 pages.

Leopold is a former member of OpenAI’s Superalignment team; he was fired in April for allegedly leaking company secrets. He contests that portrayal of events in a recent interview with Dwarkesh Patel, saying he leaked nothing of significance and was fired for other reasons.[1]

However, I am somewhat confused by the new business venture Leopold is now promoting: an “AGI Hedge Fund” aimed at generating strong returns based on his predictions of imminent AGI. In the Dwarkesh Patel interview, his stated intention is to make sure financial resources are available to back AI alignment and any other moves necessary to help humanity navigate a turbulent future. Yet the discussion in the podcast mostly focuses on whether such a fund would truly generate useful financial returns.

If you read this post, Leopold,[2] could you please clarify your intentions in founding this fund?

  1. ^

    Specifically, he brings up a memo he sent to the old OpenAI board claiming that OpenAI wasn’t taking security seriously enough. He was also one of the very few OpenAI employees who did not sign the letter calling for Sam Altman’s reinstatement last November, and of course, the entire OpenAI Superalignment team has since collapsed for various reasons as well.

  2. ^

    Leopold does have a LessWrong account, but after some time he has still not linked his new website here. I hope he doesn’t mind me posting in his stead.