Regarding 3), both of the AI-minded professors I spoke to at my university dismissed AI alignment work due to this epistemic issue.
Nothing related to AI safety is taught here, but I'll spend my free time during my PhD program going through MIRI's reading list.
If you write up thoughts along the way, framings you find useful for understanding the reading list concepts, and any new ideas that come to mind, I think that would be a great submission for the AI Alignment prize :-)