Conceptual AI safety researchers aim to help orient the broader field of AI safety, but in doing so, they must wrestle with imprecise, nebulous, hard-to-define problems. Philosophers specialize in dealing with problems like these. The CAIS Philosophy Fellowship supports PhD students, postdocs, and professors of philosophy in producing novel conceptual AI safety research.
This sequence is a collection of drafts written by the CAIS Philosophy Fellows, intended to elicit feedback.