As with all arguments against strong AI, this one has a bunch of unintended consequences.
What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?
Or carrying out the same feat with biological brains using nanotech?
In both cases, the natural limitations of the human brain have been transcended, and the chances of such an entity engineering strong AI go up enormously. You would then have to explain, somehow, why no such extension of human brain capacity can break past the AI barrier.
Why do you think that linking brains together directly would be so much more effective than email?
It’s a premise for a sci-fi story, where the topology is never discussed. If you actually think it through in detail… how are you planning to connect your million brains?
Let’s say you connect the brains as a 3D lattice, 100x100x100, where each brain connects to its 6 neighbours. Far from getting a closely cooperating team, you get a game of Chinese whispers from the brains on one side to the brains on the other.
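For a rough sense of scale, here is a quick Python sketch of the worst-case relay chain in that lattice (the helper function is made up purely for illustration; the only assumption is that a message hops one neighbour at a time):

```python
# Rough sketch: worst-case relay chain in the 100x100x100 lattice above,
# where each brain talks only to its 6 nearest neighbours.
# The shortest route between two brains is the Manhattan distance
# between their grid coordinates.

def lattice_hops(a, b):
    """Minimum number of brain-to-brain relays between grid positions a and b."""
    return sum(abs(x - y) for x, y in zip(a, b))

# Opposite corners of the 100x100x100 grid:
print(lattice_hops((0, 0, 0), (99, 99, 99)))  # 297 relays
```

So a message from one corner of the grid takes about three hundred brain-to-brain hops to reach the far corner.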
Why do you think that linking brains together directly would be so much more effective than email?
The most obvious answer would be speed. If you can simulate 1,000,000 brains at, say, 1,000 times the speed they would normally operate, the bottleneck becomes communication between nodes.
You don’t need to restrict yourself to a 3D topology. Supercomputers with hundreds of thousands of cores can and do use e.g. 6D topologies. It seems that a far more efficient way to organize the brains would be the way organizations work in real life: a hierarchical structure, where each node is at most O(log n) hops away from any other node.
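As a back-of-the-envelope comparison (a sketch, not a design; the branching factor of 10 is an arbitrary assumption), a balanced hierarchy over a million nodes is only about six levels deep, so the worst-case path is around a dozen hops rather than the lattice’s ~300:

```python
# Back-of-the-envelope comparison of worst-case hop counts for 1,000,000 nodes.
# Assumption: a balanced k-ary hierarchy (k = 10 chosen arbitrarily), where the
# worst case is going up to the common ancestor at the root and back down.

def lattice_diameter(side):
    # 3D lattice with 6 neighbours per node: opposite corners.
    return 3 * (side - 1)

def hierarchy_depth(n, k):
    depth, capacity = 0, 1
    while capacity < n:
        capacity *= k
        depth += 1
    return depth

n = 1_000_000
print(lattice_diameter(100))        # 297 hops
print(2 * hierarchy_depth(n, 10))   # 12 hops (up to the root and back down)
```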
If brains are 1000x faster, they type the emails 1000x faster as well.
Why do you think, exactly, that the brains are going to integrate correctly into a single super-mind at all? Things like memory retrieval and short-term memory rely on specific structures, and those structures do not extend across your network.
So you’ve got your hierarchical structure, and you ask the top head to mentally multiply two 50-digit numbers. Why do you think the whole thing will even be able to recite the numbers back at you, let alone perform any calculation?
Note that rather than connecting the cortices, you could just provide each brain with a fairly normal computer with which to search the work of the other brains and otherwise collaborate, the way mankind already does.
What prevents someone from, say, simulating a human brain on a computer, then simulating 1,000,000 human brains on a computer, then linking all their cortices with a high-bandwidth connection so that they effectively operate as a superpowered highly-integrated team?
Perhaps the brains would go crazy in some way. Not necessarily emotionally, but, for example, because such a connection would amplify some human biases.
Humans already have a lot of irrationalities, but let’s suppose (for the sake of a sci-fi story) that the smartest ones among us are already at the local maximum of rationality. Any change in brain structure would make them less rational. (It’s not necessarily a global maximum; smarter minds may still be possible, but they would need a radically different architecture. And humans are not smart enough to do this successfully.) So any experiments with simulating humans in computers would end up with something less rational than humans.
Also, let’s suppose that the human brain is very sensitive to some microscopic details, so any simplified simulation is dumb or even unconscious, and atom-by-atom simulations are too slow. This would rule out even “only as smart as a human, but 100 times faster” AIs.
That’s a good argument. What you’re basically saying is that the design of the human brain sits atop a sort of hill in design space that is very hard to climb down from. Now, if the utility function is “Survive as a hunter-gatherer in sub-Saharan Africa,” that is a very reasonable (heck, a very likely) possibility. But evolution hasn’t optimized us for doing stuff like designing algorithms and so forth. If you change the utility function to “Design superintelligence”, then the landscape changes, and hills start to look like valleys and so on. What I’m saying is that there’s no reason to think that we’re even at a local optimum for “design a superintelligence”.
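To make the landscape point concrete (a toy sketch with made-up one-dimensional fitness functions, not a model of real brains): the same design can be a local maximum under one objective while having strictly better neighbours under another.

```python
# Toy illustration (made-up functions, purely to show the landscape point):
# the same design can be a local maximum under one objective while having
# strictly better neighbours under a different objective.

def hunter_gatherer_fitness(x):
    # Peaks at x = 5: "survive as a hunter-gatherer"
    return -(x - 5) ** 2

def design_superintelligence_fitness(x):
    # Monotonically increasing: more is always better under this objective
    return x

def is_local_max(f, x, step=1):
    return f(x) >= f(x - step) and f(x) >= f(x + step)

x = 5  # where evolution left us, in this toy model
print(is_local_max(hunter_gatherer_fitness, x))           # True
print(is_local_max(design_superintelligence_fitness, x))  # False
```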
Sure. But let’s say we adjust ourselves so we reach that local maximum (say, hypothetically speaking, we use genetic engineering to push ourselves to the point where the average human is 10% smarter than Albert Einstein, and it turns out that’s about as smart as you can get with our brain architecture without developing serious problems). There’s still no guarantee that even that would be good enough to develop a real GAI; we can’t really say how difficult that is until we do it.