A few quick thoughts here:
Effective Altruism has been actively trying to penetrate academia. Several people work in academia essentially full-time on EA (mainly around GPI, CSER, CHAI, and FHI), but very few focus on LessWrong-style rationality. It seems to me that the main candidates for doing this all decided to work on AI safety directly instead.
I'd note that in order to introduce "LW-style rationality" to academia, you'd probably want to chunk it up into subfields and focus accordingly; epistemics is basically one such subset.
I personally expect much of the valuable work on rationality/epistemics to come from nonprofits and individuals rather than academic institutions.