Yes, when we are getting really close to AGI it will be good for the leading contenders to share info with each other. Even then it won’t be a good idea for the leading contenders to publish publicly, because then there’ll be way more contenders! And now, when we are not really close to AGI, publishing openly accelerates research in general and thus shortens timelines, while also bringing more actors into the race.
Trust between partners does not develop overnight. You don’t suddenly begin sharing information with competitors when the prize is in sight. We need a history of shared information to build upon, and now, when (as you said) AGI is not really close, is the right time to build it. Because if you don’t trust someone with GPT-3, you are certainly not going to trust them with an AGI.
Because if you don’t trust someone with GPT-3, you are certainly not going to trust them with an AGI.
Choosing not to release GPT-3’s weights to the whole world doesn’t imply that you don’t trust DeepMind or Anthropic or whoever. It just implies that there exists at least one person in the world you don’t trust.
I agree that releasing everything publicly would make it easier, and more likely, that crucial things get shared with key competitors when the time comes. Alas, I think the harms are big enough to outweigh this benefit.