I’m a bit skeptical about whether you can actually create a superintelligent AI by combining sped-up humans like that.
Why not? You are pretty smart, and all you are is a combination of 10^11 or so very “dumb” neurons.
Now imagine a “being” which is actually a very large number of human-level intelligences, all interacting...
Yeah, that didn’t come out as clear as it was in my head. If you have access to a large number of suitable less intelligent entities, there is no reason you couldn’t combine them into a single, more intelligent entity. The problem I see is the computational resources required to do so. Some back-of-the-envelope math:
I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn’t accurate (anymore), it’s probably still a good enough place to start. You mention running the simulation for a million years of simulated time. Let’s assume we can let the simulation run for a year rather than seconds; that is still 8 orders of magnitude faster than the simulated cat.
But we’re not interested in what a really fast cat can do; we need human-level intelligence. According to a quick wiki search, a human brain contains about 100 times as many neurons as a cat brain. If we assume that the cost scales linearly (which it probably doesn’t), that’s another 2 orders of magnitude.
I don’t know how many orcs you had in mind for this scenario, but let’s assume a million (far fewer humans than it took in real life before mathematics took off, but presumably this world is better suited for mathematics to be invented); that is yet another 6 orders of magnitude of processing power that we need.
Putting it all together, we would need a computer that has at least 10^16 times more processing power than modern supercomputers. Granted, that doesn’t take into account a number of simplifications that could be built into the system, but it also doesn’t take into account the other parts of the simulated environment that require processing power. Now, I don’t doubt that computers are going to get faster in the future, but 10 quadrillion times faster? It seems to me that by the time we can do that, we should have figured out a better way to create AI.
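To make the bookkeeping explicit, here is the same estimate as a few lines of Python. Every factor is one of the rough guesses above, not a measured value:

```python
# Rough orders of magnitude from the estimate above (all guesses, not data).
cat_realtime_fraction = 1e-2   # cat brain simulated at 1% of real time
speedup_wanted = 1e6           # 10^6 simulated years in 1 real year
human_vs_cat_neurons = 1e2     # ~100x the neurons of a cat, cost assumed linear
population = 1e6               # a million orcs

shortfall = (speedup_wanted / cat_realtime_fraction   # 10^8: speed gap vs. the cat
             * human_vs_cat_neurons                   # 10^2: cat brain -> human brain
             * population)                            # 10^6: one brain -> a society
print(f"{shortfall:.0e}")      # 1e+16 -> ~10^16 times today's supercomputers
```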
Here is my attempt at a calculation. Disclaimer: this is based on googling. If you are actually knowledgeable in the subject, please step in and set me right.
There are 10^11 neurons in the human brain.
A neuron will fire about 200 times per second.
It should take a constant number of flops to decide whether a neuron will fire, say 10 flops (no need to solve a differential equation; neural networks usually use some discrete heuristics for something like this).
I want a society of 10^6 orcs running for 10^6 years.
As you suggest, let’s let the simulation run for a year of real time (moving away at this point from my initial suggestion of 1 second). By my calculations (see the sketch below), for this to happen we need a computer that does 2x10^26 flops per second.
According to this:
http://www.datacenterknowledge.com/archives/2015/04/15/doe-taps-intel-cray-to-build-worlds-fastest-supercomputer/
...in 2018 we will have a supercomputer that does about 2x10^17 flops per second.
That means we need a computer that is a billion times faster than the best computer in 2018.
That is still quite a lot, of course. If Moore’s law were still ongoing, this would take ~45 years; but Moore’s law is dying. Still, it is not outside the realm of possibility for, say, the next 100 years.
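Making the arithmetic explicit, a sketch of my own numbers above (the 1.5-year doubling time is an assumption I am reading back out of the Moore’s-law figure, not something from the article):

```python
import math

# Assumptions from the list above (googled guesses, not measurements).
neurons = 1e11            # neurons per human brain
firing_rate = 200         # firing decisions per neuron per second
flops_per_decision = 10   # constant cost to decide whether a neuron fires
orcs = 1e6                # population of the simulated society
speedup = 1e6             # 10^6 simulated years squeezed into 1 real year

required = neurons * firing_rate * flops_per_decision * orcs * speedup
print(f"required: {required:.0e} flops per second")   # 2e+26

supercomputer_2018 = 2e17                             # flops per second, per the article
gap = required / supercomputer_2018
print(f"gap: {gap:.0e}x")                             # 1e+09

# If compute doubled every 1.5 years (a Moore's-law-ish assumption):
years = math.log2(gap) * 1.5
print(f"~{years:.0f} years of doubling")              # ~45 years
```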
Edit: By the way, one does not need to literally implement what I suggested; the scheme is in principle applicable whenever you have a superintelligence, regardless of how it was designed.
Indeed, if we somehow develop an above-human intelligence, rather than trying to make sure its goals are aligned with ours, we might instead let it loose within a simulated world, giving it a preference for continued survival. Just one superintelligence thinking about factoring for a few thousand simulated years would likely be enough to let us factor any number we want. We could even give it in-simulation ways of modifying its own code.
I think this calculation is too conservative. The reason is (as I understand it) that the cat-brain simulations it is based on model neurons with various differential equations, and simulating those accurately is a pain in the ass, so the 1%-speed figure reflects a much more expensive computation than we actually need. We should instead assume that deciding whether a neuron will fire takes a constant number of flops.
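For concreteness, here is a toy sketch of my own (an illustration of the kind of discrete heuristic I mean, not how any real simulator works) where the fire-or-not decision costs a fixed handful of operations per neuron per timestep:

```python
import numpy as np

rng = np.random.default_rng(0)

def step(v, leak=0.9, threshold=1.0):
    """One timestep of a toy spiking update: a constant handful of
    operations per neuron (leak, integrate input, compare, reset),
    with no differential equations solved."""
    v = leak * v + rng.normal(0.05, 0.02, size=v.shape)  # leaky integration of input
    fired = v >= threshold                               # the fire-or-not decision
    v = np.where(fired, 0.0, v)                          # reset neurons that fired
    return v, fired

v = np.zeros(1000)            # 1000 toy neurons
for _ in range(200):          # 200 steps ~ one simulated second at 200 Hz
    v, fired = step(v)
```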
I’ll write another comment which attempts to redo your calculation with different assumptions.
It seems to me that by the time we can do that, we should have figured out a better way to create AI.
But will we have figured out a way to reap the gains of AI safely for humanity?
I vaguely remember reading that with current supercomputers we can simulate a cat brain at 1% speed; even if this isn’t accurate (anymore), it’s probably still a good enough place to start.
The key question is what you consider to be a “simulation”. The predictions such a model makes are far removed from how a real cat brain actually works.