AGI Fermi Paradox

Enrico Fermi originally asked in 1950, “Where are the aliens?” Now, in 2024, it is becoming more of a question of: “Where are the AGIs?” Essentially, we have the computing power: the 2011 Watson computer could certainly have trained LaMDA and other large language models, and the world’s most powerful supercomputer could almost certainly do direct brain emulation ([https://www.lesswrong.com/posts/9kvpdK9BLSMxGnxjk/thoughts-on-hardware-limits-to-prevent-agi]). So it seems like it is time to ask: where are all the Artificial General Intelligences?
Some possible explanations for why we have not seen any:
We haven’t managed to program them yet. This is certainly possible.
AGI(s) exist, but are hiding. This could be because the AGI is waiting to become powerful enough to safely reveal itself, or because it has other reasons, such as avoiding causing a panic. It is also possible that we have noticed the AGI but haven’t realized that it is an AGI (for example, we might think the AGI is just a botnet).
An external entity is preventing AGIs. This could include a hiding AGI that doesn’t want competition, extraterrestrials preventing AGI because it might be dangerous, or even that we exist in a simulation whose runners don’t want to spend the compute needed for an AGI.
AGIs are sufficiently deadly that survivorship bias means we only exist in worlds where AGI has failed. If AGIs are almost always deadly, then we would not expect to find ourselves in a world where one arises, since the AGI would wipe us out before we could observe it.
It is possible that computers are not powerful enough for AGI, or at least not powerful enough to easily create one.
I think those five are the main possibilities: 1. not programmed yet, 2. hiding, 3. being prevented, 4. almost always deadly, and 5. computers not powerful enough.
Here are some more quick thoughts on computing power. Below is a somewhat arbitrary list of computers, from less powerful to more powerful, with RAM and peak floating-point operations per second listed. Each is roughly 1000 times more powerful than the previous (a quick sanity check of that spacing follows the list).
Commodore 64 (64 KiB, 25 kFLOPS)
Cray 1 (8 MiB, 160 MFLOPS)
Raspberry Pi 4B (4 GiB, 13.5 GFLOPS)
Watson (16 TiB, 80 TFLOPS)
Frontier supercomputer (9 PiB, 1.2 EFLOPS)
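The “1000x per step” spacing is only approximate: computed from the figures above, the RAM ratios between neighbors run from about 128x to about 4,096x, and the FLOPS ratios from about 84x to about 15,000x, so 1000x is a geometric shorthand. A minimal Python sketch of the check, using only the numbers listed above:

```python
# Ratios between successive machines, from the RAM/FLOPS figures listed above.
machines = [
    ("Commodore 64",    64 * 2**10,   25e3),   # 64 KiB, 25 kFLOPS
    ("Cray 1",           8 * 2**20,  160e6),   # 8 MiB, 160 MFLOPS
    ("Raspberry Pi 4B",  4 * 2**30, 13.5e9),   # 4 GiB, 13.5 GFLOPS
    ("Watson",          16 * 2**40,  80e12),   # 16 TiB, 80 TFLOPS
    ("Frontier",         9 * 2**50, 1.2e18),   # 9 PiB, 1.2 EFLOPS
]

for (_, ram_a, fl_a), (name, ram_b, fl_b) in zip(machines, machines[1:]):
    print(f"-> {name:<16} RAM x{ram_b / ram_a:>6,.0f}  FLOPS x{fl_b / fl_a:>7,.0f}")
```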
For (1), it is probably impossible to build an AGI on a single Commodore 64, simply because it is easy to find problems that would not fit in 64 KiB of memory.

For (2), the Cray 1 has enough memory to do things like store the DNA of the smallest known free-living bacterium, so it would be harder to prove that an AGI could not be built on it. Still, this is roughly the order of computing power of a fruit fly, and we have had such machines for nearly half a century, so it seems unlikely that an AGI can easily be created with one.

For (3), it is possible to run GPT-3 level large language models on a Raspberry Pi 4B, so running an AGI on one is probably possible with clever enough programming. Note that I don’t think that traditional “Attention is all you need” LLMs are AGIs, but they exhibit enough intelligent-ish behavior that it seems hard to argue that the amount of computing power needed to train and run an LLM is incapable of actually intelligent behavior.

For (4), Watson would be capable of training a GPT-3 level large language model, so it seems like the 2011 Watson computer probably could run an AGI.

For (5), Frontier can probably directly emulate a human brain, so it seems rather likely that it is only a matter of running the right algorithms on Frontier to have an AGI.
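As a rough check on that last claim, here is a back-of-envelope sketch in Python. The synapse count is a standard order-of-magnitude figure; the firing rate and the FLOPs per synaptic event are assumptions picked for illustration, not measured requirements:

```python
# Back-of-envelope: can Frontier (1.2 EFLOPS peak) emulate a brain in real time?
# All three inputs below are order-of-magnitude assumptions, not measurements.
synapses = 1.5e14        # ~150 trillion synapses in a human brain (rough estimate)
firing_rate_hz = 10      # assumed average spikes per second per presynaptic neuron
flop_per_event = 10      # assumed FLOPs to update one synapse per incoming spike

required = synapses * firing_rate_hz * flop_per_event   # ~1.5e16 FLOPS
frontier = 1.2e18                                       # peak FLOPS from the list above

print(f"required ~{required:.1e} FLOPS, Frontier {frontier:.1e} FLOPS "
      f"(~{frontier / required:.0f}x headroom)")
```

Under these assumptions, real-time emulation needs about 1.5e16 FLOPS, leaving Frontier roughly 80x of headroom; even if the per-synapse cost were an order of magnitude higher, the claim would still hold.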