It won't be any smarter at all, actually; it will just have more relative time.
Basically, if you give someone 100 days to do something instead of 1, they have 100 times as much time to do it, but if the task is beyond their capabilities, it remains beyond their capabilities. Running at 100x speed only helps with projects where thinking time is the major factor; if you have to run experiments and wait for results, all you're really doing is cutting the lag between experiments, and even then only potentially.
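To put rough numbers on that point (a toy illustration of my own, with made-up figures, not anything from the original argument): if only the thinking portion of a project speeds up, the total wall-clock time quickly becomes dominated by the experiments you still have to wait on.

```python
# Toy Amdahl-style model: only "thinking" time benefits from the speedup,
# while experiment/wait time stays fixed. All numbers are illustrative assumptions.
def project_days(thinking_days, waiting_days, speedup):
    return thinking_days / speedup + waiting_days

thinking, waiting = 50.0, 50.0  # assume a 100-day project split evenly between the two
for s in (1, 10, 100, 1000):
    total = project_days(thinking, waiting, s)
    print(f"speedup {s:>4}x -> {total:6.1f} days ({100 / total:.1f}x faster overall)")

# Even at 1000x thinking speed the project only finishes about 2x sooner,
# because the waiting time was never touched.
```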
It's not even as good as having 100 slaves work on a project (as someone else posited), because you're really just having ONE slave work on the project for 100 days; copying them 100 times likely won't help that issue.
This is one of the fundamental problems with the idea of the singularity in the first place: designing more intelligent intelligences is probably HARDER than designing simpler ones, possibly by orders of magnitude, and it may not be scalar at all. If you look at rodent brains and human brains, there are numerous differences between them; scaling up a rodent brain to the same EQ (encephalization quotient) as a human brain would NOT give you something as smart as a human, or even something sapient.
You are very likely to see declining returns, not accelerating returns, which is exactly what we see in all other fields of technology—the higher you get, the harder it is to go further.
Moreover, it isn't even clear what a "superhuman" intelligence means. We don't have any way of measuring intelligence absolutely that I am aware of; IQ is a statistical measure, as are standardized tests. We can't say that human A is twice as smart as human B, and without such a metric it may be difficult to determine just how much smarter anything is than a human in the first place. If four geniuses working together can get the same result as a computer that takes 1000 times as much energy to do the same task, then the computer is inefficient no matter how smart it is.
This efficiency is ANOTHER major barrier: human brains run off of Cheerios, whereas any AI we build is going to be massively less efficient in terms of energy usage per cycle, at least for the foreseeable future.
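As a rough sense of the gap (my own back-of-the-envelope sketch; the ~20 W brain figure is the commonly cited estimate, and everything else here is an assumption):

```python
# Hedged energy-efficiency comparison. The brain's power draw (~20 W) is the
# standard textbook estimate; the brain's "effective operations per second" and
# the machine's FLOPS-per-watt figure are assumptions, roughly early-2010s era.
brain_watts = 20.0
brain_ops_per_sec = 1.5e17        # assumed effective operations per second (very rough)
machine_flops_per_watt = 2.0e9    # assumed ~2 GFLOPS per watt for a supercomputer

brain_ops_per_joule = brain_ops_per_sec / brain_watts
print(f"brain:   ~{brain_ops_per_joule:.1e} ops per joule (assumed)")
print(f"machine: ~{machine_flops_per_watt:.1e} FLOPs per joule (assumed)")
print(f"gap:     ~{brain_ops_per_joule / machine_flops_per_watt:.0e}x")

# Under these assumptions the brain comes out millions of times more
# energy-efficient per operation, which is the point being made above.
```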
Another question is whether there is some sort of effective cap to intelligence given energy, heat dissipation, proximity of processing centers, etc. Given that we're only going to see microchips about 256 times as dense on a plane as what is presently available, and given the various issues with heat dissipation of 3D chips (not to mention expense), we may well run into some barriers here.
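For what it's worth, one way to arrive at a figure in that neighborhood (this is my reconstruction, not necessarily how the 256x number was derived): planar transistor density scales roughly with the inverse square of the feature size, so about eight more halvings of feature size gives 2^8 = 256 times the density.

```python
# Rough planar-density scaling sketch. Both node sizes are illustrative
# assumptions (a then-current ~22 nm process versus a hypothetical ~1.5 nm
# practical floor); only the inverse-square scaling is the point.
current_nm = 22.0   # assumed contemporary feature size
floor_nm = 1.5      # assumed practical lower limit
density_gain = (current_nm / floor_nm) ** 2
print(f"density gain: ~{density_gain:.0f}x")  # ~215x, i.e. on the order of 2**8 = 256
```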
I was looking at some of this last night, and while people claim we may be able to model the brain using an exascale computer, I am actually rather skeptical after reading up on it. While 150 trillion connections between 86 billion neurons doesn't sound like that much at exascale, there is a lot of other machinery, such as glial cells, which appears to play a role in intelligence, and it is not unlikely that its function is completely vital in a proper simulation. Indeed, our utter lack of understanding of how the human brain works is a major barrier to even thinking about how we could make something more intelligent than a human which is not a human; it's pretty much pure fantasy at this point. It may be that ridiculous parallelization with low latency is absolutely vital for sapience, and that could put a major crimp in silicon-based intelligences altogether, given their more linear nature, even with things like GPUs and multicore processors, because the human brain is sending out trillions of signals with each step.
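Here is a quick back-of-the-envelope version of why exascale both does and doesn't sound sufficient (the synapse and neuron counts are the ones above; the firing rate, work per synaptic event, and detail multipliers are my own assumptions):

```python
# Crude estimate of the compute needed to simulate a brain at different levels
# of biological detail. The firing rate, ops per synaptic event, and the
# detail multipliers are assumptions, not established figures.
synapses = 1.5e14        # ~150 trillion connections, per the figure above
avg_rate_hz = 100.0      # assumed average signalling rate
ops_per_event = 10.0     # assumed work per synaptic event in a simple point-neuron model

point_model_flops = synapses * avg_rate_hz * ops_per_event
print(f"point-neuron model: ~{point_model_flops:.0e} FLOPS")  # ~1.5e17, under exascale

# Add compartmental neuron detail, glia, and chemistry, each assumed to cost
# additional orders of magnitude, and the requirement balloons:
for detail_multiplier in (1e2, 1e4, 1e5):
    print(f"with {detail_multiplier:.0e}x more detail: ~{point_model_flops * detail_multiplier:.0e} FLOPS")

# The last case lands around 1e22 FLOPS, the figure discussed below.
```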
Some possibilities for simulating the human brain could easily take 10^22 FLOPS or more, and given the limitations of transistor-based computing, that looks like roughly the level of supercomputer we'd have around 2030. I wouldn't expect much better beyond that point, because by then the only way to make better processors is going up or out, and to what extent we can continue doing that remains to be seen; it would very likely eat up even more power, and I would have to question the ROI at some point. We DO need to figure out how intelligence works, if only because it might make enhancing humans easier. Indeed, unless intelligence is highly computationally efficient, organic intelligences may well be the optimal solution from the standpoint of efficiency, and no sort of exponential takeoff is really possible, or even likely, with such.
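The 2030 figure is consistent with a simple extrapolation (again my own arithmetic, assuming a roughly one-year doubling time for top supercomputer performance and a baseline in the tens of petaFLOPS, about the state of the art when this was written):

```python
import math

# Hedged extrapolation of peak supercomputer performance. The baseline and the
# doubling time are assumptions chosen to roughly match the early-2010s trend.
baseline_flops = 3.4e16   # assumed ~34 PFLOPS starting point
target_flops = 1e22
doubling_years = 1.1      # assumed doubling time

years_needed = math.log2(target_flops / baseline_flops) * doubling_years
print(f"~{years_needed:.0f} years to reach 1e22 FLOPS")  # ~20 years, i.e. around 2030 from ~2013
```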
You are very likely to see declining returns, not accelerating returns, which is exactly what we see in all other fields of technology—the higher you get, the harder it is to go further.
In many fields of technology, we see sigmoid curves, where initial advancements lead to accelerating returns until it becomes difficult to move further ahead without running up against hard problems or fundamental limits, and returns diminish.
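A small illustrative sketch of that curve shape (the specific parameters are arbitrary, chosen only to show the pattern):

```python
import math

# A logistic (sigmoid) curve: progress accelerates at first, then flattens as it
# approaches the ceiling, so the marginal return on each step of effort shrinks.
def logistic(t, ceiling=1.0, rate=1.0, midpoint=5.0):
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

prev = logistic(0)
for t in range(1, 11):
    cur = logistic(t)
    print(f"t={t:2d}  progress={cur:.3f}  marginal gain={cur - prev:.3f}")
    prev = cur

# Marginal gains grow up to the midpoint (t=5 here) and fall off after it:
# accelerating returns early, diminishing returns late.
```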
Making an artificial intelligence as capable as a human intelligence may be difficult, but that doesn't mean that, once we reach that point, we'll face major barriers to further progression. I would say we don't have much evidence to suggest humans are even near the ceiling of what's strictly possible for a purely biological intelligence; we've had very little opportunity for further biological development since cultural developments started accounting for most of our environmental viability, and we also face engineering constraints such as only being able to shove so large a cranium through a bipedal pelvis.
We have no way to even measure intelligence, let alone determine how close to capacity we are. We could be 90% of the way there, or 1%, and we presently have no way of distinguishing between the two.
We are the smartest creatures ever to have lived on the planet Earth as far as we can tell, and given that we have seen no signs of extraterrestrial civilization, we could very well be the most intelligent creatures in the galaxy for all we know.
As for shoving out humans, isn’t the simplest solution to that simply growing them in artificial wombs?
We already have a simpler solution than that, namely the Cesarean section. It hasn't been a safe option for long enough to have had a significant impact as an evolutionary force, though. Plus, there hasn't been a lot of evolutionary pressure for increased intelligence since the advent of agriculture.
We might be the most intelligent creatures in the galaxy, but that's a very different matter from being near the most intelligent things that could be constructed out of a comparable amount of matter. Natural selection isn't that great a process for optimizing intelligence; it has backpedaled on hominids before, given the right niche to fill. So while we don't have a process for measuring how close we are to the ceiling, I think the reasonable prior on our being close to it is pretty low.