Most promising way is just raising children better.
I highly doubt this would be very helpful in resolving the particular concerns Habryka has in mind. Namely, a world in which:
very short AI timelines (3-15 years) hold by default unless aggressive regulation is put in place; even then, full compliance is unlikely, and AGI development can realistically be delayed by at most ~1-2 generations before the risk that at least one large-scale defection has occurred becomes too high. So you don't have time for slow cultural change that takes many decades to take effect
the AI alignment problem turns out to be very hard and essentially unsolvable by unenhanced humans, no matter how smart they are, so you need enhancements that quickly produce a generation of ultra-geniuses far smarter than their “parents” could ever be