I think you are assuming something like a sublinear utility function in the difference (quality of own closed model minus quality of best open model), which would create an incentive to do just a bit better than the open model.
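A minimal sketch of that incentive, using an illustrative concave utility and linear cost that I'm assuming here (neither is specified above): let $\Delta$ be the quality gap, take utility $\sqrt{\Delta}$ and cost $c\,\Delta$, so the lab solves

$$\max_{\Delta \ge 0} \; \sqrt{\Delta} - c\,\Delta \quad\Rightarrow\quad \Delta^* = \frac{1}{4c^2},$$

which is small whenever the marginal cost $c$ of extra quality is appreciable, i.e. the optimum sits only slightly ahead of the open model.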
I think that if there is a penalty term for advancing the frontier (say, proportional to the quality of one's released model minus the quality of the open model), it can be modeled as dividing the revenue by a constant factor, since revenue was also proportional to that difference. That shouldn't change the general conclusion.
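Spelling out that constant-factor point with assumed notation ($\Delta$ for the quality gap, $k$ and $c$ for the revenue and penalty coefficients, with $k > c$): if revenue is $R(\Delta) = k\Delta$ and the penalty is $c\Delta$, the net payoff is

$$R_{\text{net}}(\Delta) = k\Delta - c\Delta = \frac{k - c}{k}\,R(\Delta),$$

so the objective is just the original revenue rescaled by a positive constant (equivalently, divided by the constant $k/(k-c)$), and its argmax, hence the general conclusion, is unchanged.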
Yeah, there needs to be something like a nonlinearity somewhere. (Or just preference inconsistency, which humans are known for, to say nothing of larger organizations.)