I think LW folks often underestimate the importance of serendipity, especially for pre-paradigmatic fields like AI Alignment.
You want people learning functional programming and compiler design and writing kernels and playing around with new things, instead of just learning the innards of ML models and reading other people’s alignment research.
You even want people to go very deep into tangential things, and become expert kernel designers or embedded systems engineers. This is how people become capable.