Extremely valuable I’d guess, but the whole problem is that alignment is still preparadigmatic. We don’t actually know yet what the well-defined nerd snipe questions we should be asking are.
I think that preparadigmatic research and paradigmatic research are two different skill sets, and most Highly Impressive People in mainstream STEM are masters of the latter, not the former.
I do think we’re more paradigmatic than we were a year ago, and that we might transition fully some time soon. I’ve got a list of concrete experiments on modularity in ML systems I’d like to see run, for example, and I think any ML-savvy person could probably do those, no skill at thinking about fuzzy far-mode things required.
So I’m not sure a sequence like this could be written today, but maybe in six months?