[copying from my comment on the EA Forum x-post]
For reference, some other lists of AI safety problems that can be tackled by people without an AI background:
- Luke Muehlhauser’s big (but somewhat dated) list: “How to study superintelligence strategy”
- AI Impacts has compiled several lists of research problems
- Wei Dai’s “Problems in AI Alignment that philosophers could potentially contribute to”
- Kaj Sotala’s case for the relevance of psychology/cognitive science to AI safety (I would add that Ought is currently testing the feasibility of IDA/Debate by doing psychological research)
Also relevant is Geoffrey Irving and Amanda Askell’s “AI Safety Needs Social Scientists.”