Perhaps your team could have helped write the safety part? Or to deliberate whether the weights and code should be made public?
The name of the paper is very meaningful (AGA ≈ AGI, obviously on purpose), so in order to be prepared, I think it is important that your safety team take part in this kind of paper.
Perhaps your team could have helped write the safety part?
I think it would be a bad use of our time to write the safety sections of all the papers that could be progress towards AGI (there are a lot of them). It seems a lot better to focus on generally improving knowledge of safety, and letting individual projects write their own safety sections.
Obviously, if an actually x-risky system were being built, it would be important for us to be involved, but I think this one was not particularly x-risky.
To be clear, we would have been happy to chat with them if they had reached out; I’m just saying that we wouldn’t want to do this for all of the AGI-related papers (and this one doesn’t seem particularly special such that we should pay special attention to it).
Or to deliberate whether the weights and code should be made public?
DeepMind generally doesn’t make weights and code public because it’s a huge hassle to do so (because our codebase is totally different from the codebases used outside of industry), so there isn’t much of a decision for us to weigh in on here.
(But also, I think we’d be more effective by working on a general policy for how to make these decisions, rather than focusing on individual cases, and indeed there is some work like that happening at DeepMind.)