What happened to his concerns over safety, I wonder?
He doesn’t believe in a ‘sharp left turn’: he doesn’t consider general intelligence to be a discontinuous (latent) capability spike after which alignment becomes significantly more difficult. To him, alignment is simply a somewhat harder empirical-techniques problem, much like capabilities work. I assume he imagines behavior similar to current RLHF-ed models even after frontier labs have doubled or quadrupled the OOMs of optimization power applied to creating SOTA models.
He models (incrementalist) alignment research as “dual use”, and therefore treats capabilities and alignment as effectively the same measure.
He also expects humans to continue to exist once certain communities of humans achieve ASI, and imagines that the future will be ‘wild’. This is a very rare and strange model to have.
He is quite hawkish: he is intensely focused on preventing China from stealing AGI capabilities, and believes that private labs will be too incompetent to defend against Chinese infiltration. He would prefer that the US government take over AGI development so that it can race effectively against China.
His model for take-off relies quite heavily on “trust the trendline”: he estimates linear intelligence increases with more OOMs of optimization power (linear with respect to human intelligence growth from childhood to adulthood). It’s not the best way to extrapolate what will happen, but it is a sensible, concrete model he can use to talk to normal people while sounding confident rather than vague, a key skill for an investor, and an especially key skill for someone trying to make it in the SF scene. (Note that he clearly states in the interview that he is describing his modal model of how things will go, and that he does have uncertainty over how things will occur, but he wants to be concrete about his modal expectation.)
He has claimed that running a VC firm means he can essentially run it as a “think tank” too, focused on better modeling (and perhaps influencing) the AGI ecosystem. Given his desire for hyper-militarization of AGI research, it makes sense that he would try to steer things in this direction using the money and influence he will build as the founder of an investment firm.
So in summary, he isn’t concerned about safety because he prices it in as about as difficult as (or slightly more difficult than) capabilities work. This puts him in an ideal epistemic position to run a VC firm for AGI labs: his optimism is what persuades investors to provide him money, since they expect him to try to return them a profit.
Leopold’s interview with Dwarkesh is a very useful source of what’s going on in his mind.