In retrospect, I think the key miracles that happened relative to two years ago were a combination of "alignment generalizes further than capabilities" and "human values are both simpler and less fragile than people used to think." The latter holds primarily because a lot of evolutionary-psychology speculation about human values and capabilities turned out to be wrong, and, more importantly, humans were discovered to be more of a blank slate than people thought. (To be clear, I don't endorse the original blank-slate idea, but I do think a bounded version of it actually works, more so than people thought 15–20 years ago.)
So in essence, I think what happened here is this miracle happening:
10. The Sharp Left Turn (the distribution shift associated with a rapid increase in capabilities) might not be that large of a leap. It could be that alignment properties tend to generalize across this distribution shift.
combined with outer alignment turning out to be easy to implement via data on human values plus trusting the generalization process (since alignment generalizes further than capabilities), with that same process also letting us prevent deceptive/inner misalignment.
I admit I was quite surprised that my 2022 self didn't think much about these miracles until 2023, with the final pieces of the puzzle being provided in August–September 2024.
Link on alignment generalizing further than capabilities below:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/