I think I still believe the thing we initially wrote:
Agree with you that there might be strong incentives to sell stuff at monopoly prices (and I’m worried about this). But if there’s a big gap, you can do this without selling your most advanced models. (You sell access to weaker models for a big markup, and keep the most advanced ones to yourselves to help you further entrench your monopoly/your edge over any and all other actors.)
I’m sceptical of worlds where 5 similarly advanced AGI projects don’t bother to sell.
Presumably any one of those could defect at any time and sell at a decent price. Why doesn’t this happen?
Eventually they need to start making revenue, right? They can’t just exist on investment forever.
(I am also not an economist though and interested in pushback.)
I think I agree with your original statement now. It still feels slightly misleading though: while ‘keeping up with the competition’ won’t provide the motivation (since there is putatively no competition), there will still be strong incentives to sell at any capability level. (And, as you say, this may be overcome by an even stronger incentive to hoard frontier intelligence for their own R&D and strategising. But that incentive outweighs, rather than annuls, the direct economic incentive to make a packet of money by selling access to your latest system.)
I agree the ‘5 projects but no selling AI services’ world is moderately unlikely; the toy version of it I have in mind is something like:
It costs $10 million up front to set up a misuse-monitoring team, API infrastructure, help manuals, a web interface, etc., before you can start selling access to your AI model.
If you are the only company to do this, you make $100 million at monopoly prices.
But if multiple companies do this, the price gets driven down to marginal inference costs, and you make ~$0 in profits and just lose the initial $10 million in fixed costs.
So all the companies would prefer to be the only one selling, but second-best is for no-one to sell, and worst is for multiple companies to sell.
Even without explicit collusion, they could all realise it is not worth selling (but worth punishing anyone who defects).
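The payoff structure above can be sketched as a simple game. This is a minimal illustration using the toy numbers from the discussion ($10M fixed cost, $100M monopoly profit, ~$0 gross profit under competition); the function name and structure are my own, not anything from the original.

```python
# Toy payoff model for the '5 projects, who sells?' game.
# Illustrative numbers from the discussion above:
#   - $10M up-front fixed cost to start selling
#   - $100M profit if you are the only seller (monopoly pricing)
#   - ~$0 gross profit if multiple firms sell (price driven to marginal cost)

FIXED_COST = 10        # $M, up-front cost to set up selling
MONOPOLY_PROFIT = 100  # $M, gross profit as the sole seller

def payoff(sells: bool, num_other_sellers: int) -> int:
    """Profit ($M) for one firm, given its choice and how many rivals sell."""
    if not sells:
        return 0
    gross = MONOPOLY_PROFIT if num_other_sellers == 0 else 0
    return gross - FIXED_COST

# Selling is a best response when no rival sells...
assert payoff(True, 0) == 90    # sole seller: monopoly profit minus fixed cost
# ...but a money-loser once anyone else is selling:
assert payoff(True, 1) == -10   # lose the fixed cost
assert payoff(False, 1) == 0    # staying out beats selling into competition
```

Note that ‘no one sells’ is not stable on its own: any single firm would gain $90M by deviating. That is why the scenario needs the punishment mechanism mentioned above (rivals threatening to enter and wipe out the deviator’s profits) to hold together.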
This seems unlikely to me because:
Maybe the up-front costs of at least a kind of scrappy version are actually low.
Consumers lack information and aren’t fully rational, so the first company to start selling would have an advantage (OpenAI with ChatGPT in this case, even after Claude became as good or better).
Empirically, we don’t tend to see an equilibrium of no company offering a service that it would be profitable for one company to offer.
So actually maybe it is sufficiently unlikely that it isn’t worth worrying about much. There does seem to be some slim theoretical world where it happens, though.