I’m not sure I personally endorse the model I’m proposing, but imagine a slightly less spherical AGI lab with more than one incentive (i.e., not just profit maximization) driving its behavior. Maybe they care at least a little bit about not advancing the capabilities frontier as fast as possible. This can cause a preference ordering like (most to least preferred):
1. Don’t argmax capabilities, because there’s no open-source competition making it impossible to profit from current-gen models.
2. Argmax capabilities, since you need to stay ahead of the open-source models nipping at your heels.
3. Don’t argmax capabilities, and go bankrupt because open source catches up to you (or gets “close enough” for enough of your customers).
ETA: But in practice most of my concerns around open-source AI development are elsewhere.
I think you are assuming something like a sublinear utility function in the difference (quality of own closed model minus quality of best open model), which would create an incentive to do just a bit better than the open model.
I think a penalty term for advancing the frontier (say, proportional to the quality of one’s released model minus the quality of the open model) can be modeled as dividing the revenue by a constant factor (since revenue was also proportional to that difference), which shouldn’t change the general conclusion.
Yeah, there needs to be something like a nonlinearity somewhere. (Or just preference inconsistency, which humans are known for, to say nothing of larger organizations.)
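A minimal numeric sketch of that point (purely illustrative: it assumes revenue linear in the quality gap d between the closed model and the best open model, a linear penalty for advancing the frontier, and sqrt(d) as the sublinear alternative; none of these functional forms or constants come from the comments above):

```python
import numpy as np

# d = quality gap between the lab's closed model and the best open model (hypothetical scale).
a, lam = 1.0, 0.3                  # revenue slope and frontier-penalty slope (made-up values)
d = np.linspace(0.01, 10, 1000)

linear_utility = a * d - lam * d            # = (a - lam) * d: the penalty just rescales revenue
sublinear_utility = np.sqrt(d) - lam * d    # diminishing returns to a bigger lead

print("linear case, best gap:", d[np.argmax(linear_utility)])       # hits the top of the grid (10)
print("sublinear case, best gap:", d[np.argmax(sublinear_utility)]) # interior optimum near 1/(4*lam**2) ≈ 2.8
```

With everything linear, the penalty only rescales the revenue curve and the optimal gap is still “as large as possible”; the concave term is what makes staying just a bit ahead of the open model the best policy, which is the nonlinearity being pointed at here.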