One comment: nuclear fission generated explosive bursts of energy and an enormous increase in the amount of energy humans could release (destructively). Very likely the "megatons per year" growth rate exceeded 30 percent in some years during the '60s and '70s.
Yet if you moved the plot backward to 1880 and asked the most credible scientists alive whether we would find a way to do this, most would be skeptical and might argue that the year-over-year increase in dynamite production didn't show 30 percent growth.
Nuclear fission is often compared to AI development, but I think it’s probably a bad comparison.
Nuclear bombs weren't built by successively making bombs more efficient or bigger. Rather, they were the result of a particular quirk of how physics works at the subatomic level. Once that quirk was understood, people quickly realized a bomb was possible, and governments began enriching uranium.
By contrast, machine learning and AI research builds on itself. Intelligence looks more like a continuum, and will be built from an accumulation of ideas and knowledge. There are rarely large theoretical leaps in the AI field; at least, none that lend themselves rapidly to practical applications. The best ideas in AI usually have ancestry in precursor ideas that were almost as good. For example, almost all of the ideas in modern ML are downstream of probability theory, learning theory, and backpropagation, which were developed many decades ago.
The reason to compare it to fission is self-gain. For a fission reaction, that quirk of physics is called criticality: the neutrons produced self-amplify, leading to exponential gain. Until sufficient fissionable material was concentrated (the Chicago Pile), there was zero fission gain, and you could say fission was only a theoretical possibility.
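The criticality threshold can be sketched numerically. In this toy model (illustrative only, not reactor physics), each neutron produces k new neutrons per generation on average; below k = 1 the chain dies out, and above it the population grows exponentially:

```python
# Toy model of fission chain-reaction gain: each generation, every
# neutron yields k new neutrons on average (k = multiplication factor).
def neutron_population(k, generations, n0=1.0):
    n = n0
    for _ in range(generations):
        n *= k
    return n

# Subcritical (k < 1): the chain reaction fizzles out.
print(neutron_population(0.9, 50))   # ~0.005 — effectively no gain
# Supercritical (k > 1): exponential, self-amplifying growth.
print(neutron_population(1.1, 50))   # ~117 — runaway amplification
```

The point of the sketch is the sharp threshold: a small change in k around 1 flips the system from "theoretical possibility" to explosive gain.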
Today human beings design and participate in building AI software, AI computer chips, and robots for AI to drive. They also gather the resources to make these things.
The 'quirk' we expect to exploit here is that human minds are very limited in I/O and lifespan, and have many inefficiencies and biases. They are also millions of times slower than computer chips that already exist, at least for individual subsystems. They were shaped by nature to handle far narrower problem domains than the ones we face now, and thus we are bad at these new problems.
The 'quirk', therefore, is that if you can build a mind superior to a human's but just as robust and broad in its capabilities, and the physical materials required are a small amount of refined silicon or carbon with modest energy requirements (say, a 10 cm cube drawing 1 kW), you can order those machines to self-replicate, getting the equivalent of adding trillions of workers to our population without any of the needs or desires of trillions of people.
This will obviously cause explosive economic growth. Will it be over 30% in a single year? No idea.
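To make the replication arithmetic concrete, here is a toy calculation (the one-copy-per-cycle rate is a made-up assumption, not a forecast): if each machine builds one copy of itself per replication cycle, the population doubles every cycle, so going from a single machine to a trillion workers takes only about 40 cycles.

```python
import math

# Toy self-replication arithmetic (illustrative assumptions only):
# each machine builds one copy of itself per cycle, so the
# population doubles every cycle.
def cycles_to_reach(target_workers, start=1):
    """Number of doubling cycles to grow from `start` to `target_workers`."""
    return math.ceil(math.log2(target_workers / start))

print(cycles_to_reach(1e12))  # 40 cycles from one machine to a trillion
```

However fast a single cycle turns out to be in practice, the doubling structure is what makes the growth explosive rather than incremental.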