Whereas the Regulated Source model would aim to compel responsible parties to share access to AI development products, in an effort to counteract race dynamics and the centralization of power.
I guess my issue is that I expect regulated source to effectively become open source, because someone will leak it like they leaked LLaMA.
I think it should be possible to introduce more stringent security measures. We generally manage to keep important private keys from leaking, so if we treat weights just as carefully, we should be able to achieve at least a similar track record. We could also forbid the unregulated use of such software, just as we forbid the unregulated use of nuclear technology. And in the limit, the same problem exists in a closed-source world.
LLaMA is a special case because there were no societal incentives against it spreading… quite the opposite! Because it was “proprietary”, it was the magic secret sauce that everyone wanted in order to stay afloat and stay in the race. In such an environment, it’s clear that leaking or selling out is just a matter of time. I’m trying to advocate a paradigm shift in which work on AI becomes a regulated industry shaped for the benefit of all, rather than one driven by profit maximization.
I think my intuition would be the opposite… The more room for profit, the more incentives for race dynamics and irresponsible gold rushing. Why would you think it’s the other way around?
I’m not a fan of profit maximisation either.
Although I’m much more concerned about the potential for us to lose control than about a particular corporation making a bit too much profit.