Yes. Commoditize-your-complement dynamics do not come with any set number. They can justify an expense of thousands of dollars, or of billions; it all depends on the context. If you are in a big enough industry, and the profits at stake are large enough, and the investment in question is critical enough, you can justify any number as +EV. (Think of it less as ‘investment’ and more as ‘buying insurance’. Facebook (META) has a market cap of ~$1,230 billion right now; how much insurance should its leaders buy against the periodic emergence of new platforms or possible paradigm shifts? Definitely at least in the single billions, one would think...)
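To make the insurance framing concrete, here is a minimal back-of-envelope sketch; the shift probability, loss fraction, and hedge effectiveness are purely illustrative assumptions (only the ~$1,230 billion market cap comes from the text above):

```python
# Hypothetical 'insurance' expected-value sketch. Only the market cap is
# from the text above; every other number is an illustrative assumption.
market_cap = 1_230e9   # ~$1,230 billion (META market cap, as cited)
p_shift    = 0.10      # assumed annual chance of a serious platform shift
loss_frac  = 0.25      # assumed fraction of market cap lost if it happens
p_averted  = 0.50      # assumed chance the insurance spending averts the loss

expected_loss = p_shift * loss_frac * market_cap
break_even    = p_averted * expected_loss  # max annual spend that is still +EV

print(f"expected annual loss: ${expected_loss / 1e9:.1f}b")
print(f"+EV insurance spend:  up to ~${break_even / 1e9:.1f}b/year")
# -> expected annual loss: $30.8b
# -> +EV insurance spend:  up to ~$15.4b/year
```

Even with these deliberately modest inputs, the break-even spend lands comfortably in the single billions per year.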
And investments of $10m are highly routine and ordinary, and people have already released weights (note: most of these AI releases are not ‘open source’, including Llama-3) for models with easily $10m of investment before. (Given that a good ML researcher-engineer can have a fully-loaded cost of $1m/year, if you have a small team of 10 and they release a model per year, then you hit $10m spent in the first year alone.) Consider Linux: if you wanted to make a Linux kernel replacement today, one as battle-tested and supporting as much as the kernel does, it would probably cost you at least $10 billion; and the creation of Linux has been principally bankrolled by many companies collectively paying for development (for a myriad of reasons and in a myriad of ways). Or consider Android Linux. (Or go through my list and think about how much money it must take to do things like RISC-V.)
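The routine-$10m figure is just the staffing multiplication from the parenthetical above, before a single GPU-hour is counted; a trivial sketch using the figures already given in the text:

```python
# The routine-$10m figure from the text: staff cost alone, no compute.
team_size     = 10          # small team (figure from the text)
cost_per_head = 1_000_000   # $1m/year fully-loaded (figure from the text)

staff_cost_year_one = team_size * cost_per_head
print(f"staff alone, year one: ${staff_cost_year_one / 1e6:.0f}m")
# -> staff alone, year one: $10m
```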
If Zuckerberg feels that LLMs are enough of a threat to the Facebook advertising model, or enough of a route to a new social-media platform which could potentially supersede Facebook (as Instagram and WhatsApp were), then he certainly could justify throwing a billion dollars of compute at a weights release in order to shatter the potential competition into a commoditized race to the bottom. (He’s already blown much, much more on VR.)
The main prediction of commoditize-your-complement, I think, is that there is not much benefit to creating the leading-edge model or surpassing the SOTA by a lot. Your incentive is to release the cheapest model which serves as a spoiler. So Llama-3 doesn’t have to be better than GPT-4 to spoil the market for OA: it just needs to be ‘good enough’. If you can do that by slightly beating GPT-4, then great. (But there’s no appetite for some amazing moonshot far surpassing SOTA.)
However, because LLMs are moving so fast, this isn’t necessarily too useful a point: Zuckerberg’s goal with Llama-3 is not to spoil GPT-4 (which has already been accomplished by Claude-3 and Databricks and some others, I think), but to spoil GPT-5, as well as Claude-4 and unknown competitors. You have to skate to where the puck will be, because if you wait for GPT-5 to fully come out before you start spinning up your commoditizer model, your teams will have gone stale, your infrastructure will have rotted, you’ll lose a lot of time, and who knows what will happen with GPT-5 before you finally catch up.
The real killer of Facebook’s investment would be the threat disappearing and permanent commoditization setting in, perhaps by LLMs sigmoiding hard and starting to look like a fad, like 3D TVs. For example, if GPT-5 came out and it was barely distinguishable from GPT-4, and nothing else impressive happened, and “DL hit a wall” at long last, then Llama-4 would probably still happen at full strength (since Zuck already bought all those GPUs), but then I would expect a Llama-5 to be much less impressive, running on fumes, and not to receive another 10 or 100x scaleup; Facebook DL R&D would return to normal conditions.
EDIT: see https://thezvi.wordpress.com/2024/04/22/on-llama-3-and-dwarkesh-patels-podcast-with-zuckerberg/