It is false today that big companies with 10x the galaxy brains and 100x the capital reliably outperform upstarts.[1]
Why would this change? I don’t think you make the case.
My favorite example, though it might still be falsified: Google invented transformers, owns DeepMind, runs its own data centres, builds its own accelerators and has huge numbers of them, has tons of hard-to-get data (all those books it scanned back before that became not okay to do), insane distribution channels via Gmail, Docs, Sheets, and the millions of smartphones sold a year, more cash than God, and it literally runs Google Search (which it didn’t have to build from scratch like Perplexity!). I struggle to name any on-paper advantage they don’t have in the AGI race… they’re positioned for an earth-shattering vertical-integration play… and yet they’re behind OpenAI and Anthropic.
There are more clear-cut examples than this.
They have “galaxy brains”, but applying those galaxy brains strategically toward your goals is also an aspect of intelligence. Additionally, those “galaxy brains” may be ineffective because of alignment issues with the company, whereas in a startup you can often get 10x or 100x more out of fewer employees because they have equity and understand that failure is existential for them. Demis may be smart, but he made a major strategic error if his goal was to lead the AGI race, and despite the fact that he did, he is still running DeepMind, which suggests an alignment/incentive issue with regard to Google’s short-term objectives.
OpenAI is worth about $150 billion and has the backing of Microsoft. Google’s Gemini is apparently competitive now with Claude and GPT-4. Yes, Google was sleeping on LLMs two years ago and OpenAI is a little ahead, but this moat is tiny.
For example:
Currently, big companies struggle to hire and correctly promote talent for the reasons discussed in my post, whereas AI talent will be easier to find/hire/replicate given only capital and legible info
To the extent that AI ability scales with resources (potentially boosted by inference-time compute, and especially if SOTA models are no longer available to the public), better-resourced actors have better galaxy brains (see the toy sketch after this list)
Superhuman intelligence and organisational ability in AIs will mean less bureaucratic rot and fewer communication-bandwidth problems in large orgs, compared to orgs made of human-brain-sized chunks, reducing the costs of scale
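To make the second point concrete, here’s a minimal sketch in Python, assuming (purely for illustration) that capability grows with the log of training compute plus a smaller bonus from inference-time compute; the capability function and the numbers are hypothetical, not an empirical scaling law:

```python
import math

# Hypothetical toy model, not an empirical scaling law: capability
# grows with the log of training compute, plus a smaller bonus from
# inference-time compute.
def capability(train_flops: float, inference_flops: float) -> float:
    return math.log10(train_flops) + 0.5 * math.log10(inference_flops)

startup = capability(1e24, 1e15)
incumbent = capability(1e26, 1e17)  # 100x the resources on both axes
print(f"startup={startup:.1f}, incumbent={incumbent:.1f}, "
      f"edge={incumbent - startup:.1f}")
```

Under any monotone scaling like this, the better-capitalised actor’s edge is baked in: “talent” reduces to compute, so it can’t be out-hired.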
Imagine, for example, a world where software engineering is incredibly cheap. You can start a software company very easily, yes, but Google can monitor the web for any company that makes revenue off software, instantly clone the functionality (because software engineering is now just a turn-the-crank-on-the-LLM task), and combine it with their platform advantage and existing products and distribution channels. Right now, by contrast, it would cost Google a lot of precious human time and focus even to monitor all the developing startups, let alone launch a competing product for each one. Of course, it might be that Google itself is too bureaucratic and slow to ever do this, but someone else will then take this strategy.
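Here’s a minimal sketch of that monitor-and-clone loop, where every function (fetch_new_startups, generate_clone, deploy) is a hypothetical stub; the point is just how short the loop becomes once the engineering step is a crank-turn:

```python
from dataclasses import dataclass

@dataclass
class Startup:
    name: str
    revenue: float
    product_spec: str  # scraped description of what the product does

def fetch_new_startups() -> list[Startup]:
    # Stub: a real incumbent would crawl app stores, payment
    # processors, job boards, launch sites, etc.
    return [Startup("acme-notes", 2e6, "collaborative markdown notes")]

def generate_clone(spec: str) -> str:
    # Stub for the turn-the-crank-on-the-LLM step: spec in, product out.
    return f"<codebase implementing: {spec}>"

def deploy(product: str, channel: str) -> None:
    print(f"shipping {product!r} via {channel}")

for s in fetch_new_startups():
    # Only clone proven demand; let the startups bear the search costs.
    if s.revenue > 1e6:
        deploy(generate_clone(s.product_spec), "gmail/docs/android")
```

The incumbent never pays the discovery cost: the startup’s revenue is the signal, and the clone step is nearly free.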
Cf. the oft-quoted line that the startup’s challenge is getting distribution before the incumbents get innovation. But if the innovation is engineering, and the engineering is trivial, how do you get the time to get distribution right?
(Interestingly, as I describe it above, the key thing is not so much capital intensity as the fact that innovation/engineering is no longer a source of differential advantage, because everyone can do it really well with their AIs.)
There’s definitely a chance that there’s some “crack” in this, from the economics, the nature of AI performance, or some interaction between them. In particular, as I mentioned at the end, I don’t think modelling the AI as an approaching blank wall of perfect, all-obsoleting intelligence is the right model for short-to-medium-term dynamics. Would be very curious if you have thoughts.