As a brief aside, I don’t think there is a single “introduction to AI x-risk” resource that rigorously and compellingly presents, from start to finish, the core arguments around AI x-risk.
Any specific things you think The Alignment Problem from a Deep Learning Perspective misses?
In general, +1 for the post, although “miracles” doesn’t feel like the right description of them; “reasons for hope” fits better. (I’m pretty unimpressed by the “miracles” terminology, since nobody has a model of the future of AI robust enough that it’s anywhere near reasonable to call violations of that model “miracles”.)
I would argue that there are true miracles here, despite my thinking we aren’t doomed: we know enough to say that alignment probably isn’t going to be solved by a simple trick, but that doesn’t mean the problem is impossible.
The biggest miracles would be, in order of how surprised I’d be:
Deceptive/inner alignment either not proving to be a problem, or there being a broad basin around honesty that’s easy to implement, such that in the best case we may not need too much interpretability.
Causal, Extremal, and Adversarial Goodhart either not being problems, or being easy to correct (a toy sketch of the extremal case follows this list).
ELK being solved by default.
Outer alignment being easy to implement in the real world via HCH, i.e. imitative amplification/IDA.
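To make the Goodhart item above concrete, here’s a minimal toy sketch in Python (my own illustration with made-up functions and numbers, not something from the post or the Goodhart-taxonomy papers) of the Extremal Goodhart failure mode: a proxy that tracks the true goal on typical inputs comes apart from it once an optimizer pushes into the tail of the distribution.

```python
# Toy Extremal Goodhart sketch: the proxy matches the goal on typical inputs,
# but hard optimization lands in the tail, where proxy and goal come apart.
import random

def true_value(x):
    # True objective: rises with x on the typical range, then collapses
    # in a regime the proxy was never checked against.
    return x if x <= 8 else 8 - 5 * (x - 8)

def proxy_value(x):
    # Proxy: a noisy measurement that keeps rewarding larger x everywhere.
    return x + random.gauss(0, 0.3)

random.seed(0)
population = [random.expovariate(1.0) for _ in range(100_000)]  # most mass well below 8

mild = max(random.sample(population, 20), key=proxy_value)  # weak optimization pressure
hard = max(population, key=proxy_value)                     # strong optimization pressure

# Typically: the mild optimum stays where proxy ≈ goal, while the hard
# optimum sits far in the tail with a sharply negative true value.
print(f"mild optimization: x = {mild:.2f}, true value = {true_value(mild):.2f}")
print(f"hard optimization: x = {hard:.2f}, true value = {true_value(hard):.2f}")
```

Roughly, Causal and Adversarial Goodhart are the related failures where intervening on the proxy doesn’t actually move the goal (they only share a common cause), or where another agent is deliberately gaming the proxy.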
In retrospect, I think the key miracles that happened relative to two years ago were a combination of “alignment generalizes further than capabilities” and “human values are both simpler and less fragile than people used to think”, primarily because a whole lot of evolutionary-psychology speculation about human values and capabilities turned out to be wrong, and, more importantly, humans were discovered to be more of a blank slate than people thought. (I don’t endorse the original blank-slate idea, to be clear, but I do think a bounded version of it actually works, more so than people thought 15-20 years ago.)
So in essence, I think what happened here is this miracle:
10. The Sharp Left Turn (the distribution shift associated with a rapid increase in capabilities) might not be that large of a leap. It could be that alignment properties tend to generalize across this distribution shift.
combined with outer alignment being easy to implement via data on human values plus trusting the generalization process (because it turned out alignment generalizes further than capabilities), and with us being able to prevent deceptive/inner misalignment via the same process.
I admit I’m quite surprised that my 2022 self didn’t think much about these miracles; I only really started to in 2023, with the final pieces of the puzzle falling into place in August-September 2024.
Link on alignment generalizing further than capabilities below:
https://www.beren.io/2024-05-15-Alignment-Likely-Generalizes-Further-Than-Capabilities/