I buy your argument for why dramatic enhancement is possible. I just don’t see how we get the time. I can barely see a route to a ban, and I can’t see a route to a ban thorough enough to prevent reckless rogue actors from building AGI within ten or twenty years.
And yes, this is crazy as a society. I really hope we get rapidly wiser. I think that’s possible; look at how attitudes toward COVID shifted dramatically in about two weeks once the evidence became apparent and people rapidly convinced their friends. Connor Leahy made some really good points about the nature of persuasion and societal belief formation in his interview on the previous episode of the same podcast. It’s in the second half of the podcast; the first half is super irritating, as they get into an argument about the “nature of ethics” despite having nearly identical positions. I might write that up too; it makes entirely different but equally valuable points IMO.
This is why I wrote a blog post about enhancing adult intelligence at the end of 2023; I thought it was likely that we wouldn’t have enough time.
I’m just going to do the best I can to work on both of these things. The ability to make a large number of edits at the same time is one of the key technologies for both germline and adult enhancement, and it’s what my company has been working on. And though it’s slow going, we have made pretty significant progress in the last year, including finding several previously unknown ways to get higher editing efficiency.
I still think the most likely way alignment gets solved is just smart people working on it NOW, but it would sure be unfortunate if we DO get a pause and no one has any game plan for what to do with that time.
Agreed and well said. Playing a number of different strategies simultaneously is the smart move. I’m glad you’re pursuing that line of research.
You people are somewhat crazy overconfident about humanity knowing enough to make AGI this decade. https://www.lesswrong.com/posts/sTDfraZab47KiRMmT/views-on-when-agi-comes-and-on-strategy-to-reduce
One hope on the scale of decades is that strong germline engineering should offer an alternative vision to AGI. If the options are “make a supergenius non-social alien” and “make many genius humans”, it ought to be clear that the latter is both much safer and gets most of the hypothetical benefits of the former.
Sorry if I sound overconfident. My actual considered belief is that AGI this decade is quite possible, and that it would take crazy overconfidence in longer-timeline predictions to not prepare seriously for that possibility.
Multigenerational stuff needs a way longer timeline. There’s a lot of space between three years and two generations.