My guess is that training cutting-edge models and not releasing them is a pretty good play, or would have been if there weren't huge AGI hype.
As it is, information about your models is going to leak, and in most cases the fact that something is possible is most of the secret to reverse-engineering it (note: this might be true in the regime of transformer models, but it might not hold for other tasks or sub-problems).
On the other hand, given the hype, people are going to try to do the things you're doing anyway, so maybe leaks about your capabilities don't make that much difference?
This does point to an important consideration: how much information needs to leak from your lab before someone else can replicate your results?
It seems like, in many cases, there's an obvious way to do a task, and the mere fact that you succeeded is enough information to recreate your result. But presumably there are cases where you figure out a clever trick, and even if evidence of your model's performance leaks, that doesn't tell the world how you did it (though it does set maybe hundreds of smart people searching for the trick themselves).
I think I should regard the situation differently depending on where a given result falls on that axis.
That’s the hard part.