Despite the ‘proof’ being public, the scaling hypothesis remained a Thielian secret until recently. It took ChatGPT to convince Facebook/Meta AI research that scaled-up prosaic AI was a plausible path to beyond-human intelligence. They recently had internal meetings sounding the alarm. DeepMind also believed that a significant number of fundamental advances would be needed. Sadly, this is not the case. The scaling curves don’t bend anywhere near fast enough. I highly recommend reading about how many ‘bees’ worth of synapses/neurons various animals and LLMs have. For example: “Davinci is a 175-bee model, which gets us up to hedgehog (or quail) scale. Amusingly, PaLM, at 540 bees, has about as many parameters as a chinchilla has synapses.”
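For concreteness, here is a rough back-of-the-envelope sketch of the ‘bee’ unit. The only assumption is roughly 10^9 synapses per honeybee brain, which is what the quoted 175-bee figure for the 175B-parameter Davinci implies:

```python
# A sketch of "bee" arithmetic: convert parameter counts into honeybee-brain
# synapse counts. Assumes ~1e9 synapses per bee, the figure the quote implies.
SYNAPSES_PER_BEE = 1e9

def bees(n_params: float) -> float:
    """Express a model's parameter count in units of honeybee synapse counts."""
    return n_params / SYNAPSES_PER_BEE

print(f"Davinci (175B params): ~{bees(175e9):.0f} bees")  # ~175 bees
print(f"PaLM    (540B params): ~{bees(540e9):.0f} bees")  # ~540 bees
```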
Here is Anthropic’s version of events:
In 2019, several members of what was to become the founding Anthropic team made this idea precise by developing scaling laws for AI, demonstrating that you could make AIs smarter in a predictable way, just by making them larger and training them on more data. Justified in part by these results, this team led the effort to train GPT-3, arguably the first modern “large” language model, with over 173B parameters.

Since the discovery of scaling laws, many of us at Anthropic have believed that very rapid AI progress was quite likely. However, back in 2019, it seemed possible that multimodality, logical reasoning, speed of learning, transfer learning across tasks, and long-term memory might be “walls” that would slow or halt the progress of AI. In the years since, several of these “walls”, such as multimodality and logical reasoning, have fallen. Given this, most of us have become increasingly convinced that rapid AI progress will continue rather than stall or plateau. AI systems are now approaching human level performance on a large variety of tasks, and yet training these systems still costs far less than “big science” projects like the Hubble Space Telescope or the Large Hadron Collider – meaning that there’s a lot more room for further growth.
Imo, less formalized versions of the scaling hypothesis/laws were known well before 2019. Our more recent, better understanding of scaling laws should not make us much less bullish on AI progress. Though perhaps there is hope we ‘run out of data’ before AI becomes too dangerous. I would not bank on this.
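For concreteness, here is a minimal sketch of the kind of scaling law being discussed, in roughly the Chinchilla parametric form from Hoffmann et al. (2022). The coefficient values are approximately the paper’s fitted ones and should be read as illustrative assumptions, not gospel:

```python
# A sketch of a Chinchilla-style parametric scaling law: loss as a function of
# parameter count N and training tokens D. Coefficients are roughly the fitted
# values reported by Hoffmann et al. (2022); treat them as illustrative.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted training loss L(N, D) = E + A/N^alpha + B/D^beta."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

# Loss keeps falling smoothly as both axes scale (the paper suggests roughly
# 20 tokens per parameter as the compute-optimal ratio) -- no wall in sight.
for n, d in [(1e9, 20e9), (70e9, 1.4e12), (1e12, 20e12)]:
    print(f"N={n:.0e}, D={d:.1e} -> predicted loss ~ {predicted_loss(n, d):.2f}")
```

The ‘run out of data’ hope amounts to noticing that at these ratios the binding constraint may be D (tokens of data) rather than N (parameters).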
As long as the scaling hypothesis remained a secret*, doing anything that risked convincing more people of it was extremely risky, and imo very unethical. It might have seemed, from the inside view, that either the ‘secret’ was obvious or that OpenAI was going to convince everyone of it anyway. But until recently there was hope that OpenAI would eventually stop being so reckless. At this point, however, the secret is pretty much out. An AI arms race has been triggered, and existing actors will shortly convince many of the remaining important doubters. This will soon include the government of [insert country you don’t like].
EAs systematically underestimate the dangers posed by aligned AI. There is not much reason to assume that just because an AI system is aligned to the goals of some humans, it will be good for humanity as a whole. Of course, many in AI safety are focused on ‘at least not everyone dies’ as a wincon, but I would hope for a better future. If AI is extremely hard to align, the current game-board is just not very likely to be winnable; time is running out fast. It is probably still bad to work on capabilities at top labs like Anthropic or OpenAI. If you happen to make a big advance, like switching to ReLU, you will burn a huge amount of timeline. But working on cool AI projects now seems positive to me.
Previously, any cool project unacceptably risked convincing even more people of the scaling hypothesis. But if the secret is out, it seems worthwhile to try to steer AI in a positive direction. This has been a very big update for me. Until very recently, I promoted a hardline stance of ‘absolutely do not work on AI’. But now we might as well play to our out of ‘AI isn’t that hard to align’ and work on steering toward a brighter, based future.