One thing I wonder: does this reduce how much a company would be willing to spend on a large model, given the likelihood that any competitive advantage it lends would be eroded via cybertheft?
Could this go some of the way toward explaining why we don’t get billion-dollar model runs, as opposed to engineering-heavy research, which is naturally more distributed and harder to steal?
I’d expect companies to mitigate the risk of model theft with fairly affordable insurance. Movie studios and software companies invest hundreds of millions of dollars in individual, easily copyable MPEGs and executable files. Billion-dollar models probably don’t meet the risk/reward criteria yet. When a $100M model is human-level AGI, it will almost certainly be worth the risk of training a $1B model.
This could mitigate the financial risk to the company, but I don’t think anyone will sell existential-risk insurance, or that it would be effective if they did.