Good points all; these are good reasons to work on AI safety (and of course, as a theorist, I’m very happy to think about interesting problems even if they don’t have immediate impact :-). I’m definitely interested in the short-term issues, and have been spending a lot of my research time lately thinking about fairness/privacy in ML. Inverse-RL/revealed-preferences learning is also quite interesting, and I’d love to see some more theory results in the agnostic case.