What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?
Perhaps the brains would go crazy in some way. Not necessarily emotionally, but, for example, because such a connection would amplify some human biases.
Humans already have a lot of irrationalities, but let’s suppose (for the sake of a sci-fi story) that the smartest ones among us are already at a local maximum of rationality. Any change in brain structure would make them less rational. (It’s not necessarily a global maximum; smarter minds could still be possible, but would need a radically different architecture, and humans are not smart enough to build one successfully.) So any experiments with simulating humans on computers would end up with something less rational than humans.
Also, let’s suppose that the human brain is very sensitive to some microscopic details, so any simplified simulation is dumb or even unconscious, and atom-by-atom simulations are too slow. This would rule out even “only as smart as a human, but 100 times faster” AIs.
That’s a good argument. What you’re basically saying is that the design of the human brain occupies a sort of hill in design space that is very hard to climb out of. Now, if the utility function is “Survive as a hunter-gatherer in sub-Saharan Africa,” that is a very reasonable (heck, a very likely) possibility. But evolution hasn’t optimized us for things like designing algorithms. If you change the utility function to “Design a superintelligence,” the landscape changes, and hills start to look like valleys. What I’m saying is that there’s no reason to think we’re even at a local optimum for “design a superintelligence.”
Sure. But let’s say we adjust ourselves so we reach that local maximum (say, hypothetically speaking, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that’s about as smart as you can get with our brain architecture without developing serious problems). There’s still no guarantee that even that would be good enough to develop a real AGI; we can’t really say how hard that is until we do it.
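The “hills become valleys” point in the exchange above can be made concrete with a toy sketch (mine, not from the thread): the same point in a one-dimensional “design space” can be a local maximum under one objective function and not under another. Both objectives below are made up purely for illustration.

```python
# Toy illustration: whether a point is a local optimum depends on the objective.

def fitness_hunter_gatherer(x):
    # Hypothetical objective with a peak at x = 0.
    return -(x ** 2)

def fitness_design_superintelligence(x):
    # Hypothetical objective that keeps rising past x = 0.
    return x

def is_local_max(f, x, eps=1e-3):
    """Check whether x is a local maximum of f by probing its neighbors."""
    return f(x) >= f(x - eps) and f(x) >= f(x + eps)

x_human = 0.0  # stand-in for where human brains sit in design space
print(is_local_max(fitness_hunter_gatherer, x_human))           # True
print(is_local_max(fitness_design_superintelligence, x_human))  # False
```

The point of the sketch is only that a hill-climbing process stuck at a peak of one landscape need not be stuck at all once the fitness function is swapped out.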