I don’t interpret this as an attempt to make tangible progress on a research question, since it presents an environment rather than an algorithm. It’s more like a concrete specification of a (very small) subset of the problems that are important. Without steps like this I think it’s very clear that alignment problems will NOT get solved; steps like this are probably (~90%) necessary but almost certainly (~99.99%) not sufficient.
I think this is well within the domain of problems that are valuable to solve for current ML models and deployments, and not in the domain of constraining superintelligences or even AGI. Because of this, I wouldn’t say it constitutes a strong signal that DeepMind will pay more attention to AI risk in the future.
I’m also inclined to think that any successful endeavor at friendliness will need both mathematical formalisms for what friendliness is (i.e., MIRI-style work) and technical tools and subtasks for implementing those formalisms (like those presented in this paper). So I’d say this paper is tangibly helpful, though far from complete, regardless of its position within DeepMind or the surrounding research community.