I imagine you could catch useful work with (i) models of AI safety, or (ii) analysis of failure modes, or something along those lines, though I'm obviously biased here.