My essay on the topic:
http://alife.co.uk/essays/the_singularity_is_nonsense/
See also:
“The Singularity” by Lyle Burkhead—see the section “Exponential functions don’t have singularities!”
http://www.geniebusters.org/29_singularity.html
It’s not exponential, it’s sigmoidal (a short numerical sketch of this point follows the link list below)
http://radar.oreilly.com/2007/11/its-not-exponential-its-sigmoi.html
The Singularity Myth
http://www.growth-dynamics.com/articles/Kurzweil.htm
Singularity Skepticism: Exposing Exponential Errors
http://www.youtube.com/watch?v=p_svQQ5g2hk
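One of the links above argues that the apparent exponential is really the early portion of a sigmoid. Here is a minimal numerical sketch of that point (the growth rate and carrying capacity are arbitrary illustrative values, not figures taken from the linked article):

```python
import math

# Logistic ("sigmoidal") growth with a carrying capacity K is nearly
# indistinguishable from pure exponential growth while the quantity is
# still far below K, which is why a clean-looking exponential can be
# misleading to extrapolate. All parameters here are assumptions.

r = 0.5      # growth rate per unit time (assumed)
K = 1e12     # carrying capacity / resource limit (assumed)
x0 = 1.0     # starting value

def exponential(t):
    return x0 * math.exp(r * t)

def logistic(t):
    return K / (1.0 + (K / x0 - 1.0) * math.exp(-r * t))

for t in (10, 20, 30, 40, 50, 60):
    e, s = exponential(t), logistic(t)
    print(f"t={t:2d}  exp={e:10.3e}  logistic={s:10.3e}  logistic/exp={s / e:.3f}")
```

Early on the ratio stays near 1, and only as the curve approaches its limit do the two visibly diverge.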
IMO, those interested in computational limits should discuss per-kg figures.
The metric Moore’s law tracks—transistors per chip—is not much use really, since it would be relatively easy to make large asynchronous ICs with lots of faults, which would make a complete mess of the “law”.
I would love to see an ongoing, big, wiki-style FAQ addressing all the criticisms of the singularity that have been received — refuting the refutable ones, of course, and accepting the sensible ones.
A version on steroids of what this one did with Atheism.
The team would be:
one guy inviting and sorting out criticisms and updating the website;
an ad hoc team of responders.
It seems criticisms and answers have been scattered all over, with no one-stop source for them.
Here’s a pretty extensive FAQ, though I have reservations about a lot of the answers.
The authors are—or were—SI fellows, though—and the SI is a major Singularity promoter. Is that really a sensible place to go for Singularity criticism?
http://en.wikipedia.org/wiki/Technological_singularity#Criticism lists some of the objections.
Wow, good stuff. I especially liked this one of yours, not linked above:
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn’t going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.
But you make a very good case that the whole thing is bunk. I especially like the “different levels of intelligence” point; I had not heard that before in the context of AI.
But I still find it tempting to say there is just something special about machines that can design other machines: that, like pointing a camcorder at a TV screen, it leads to some kind of instant recursion. But maybe it really is like that: a neat trick, but not something which changes everything all of a sudden.
I wonder if someone 50 years ago said “some day computers will display high-quality video and everyone will watch computers instead of TV or film”. Sure, it is happening, but it’s a rather long, slow transition which in fact might never be 100% complete. Maybe AI is more like that.
IIRC, Vinge said that the Singularity might look like a shockingly sudden jump from an earlier point of view, but looking back over it, it might seem like a comprehensible if somewhat bumpy road.
It hasn’t been fast, but I think a paleolithic human would have a hard time understanding how an economic crisis is possible.
I’m starting to believe the term The Singularity can be replaced with The Future without any loss. Here is something from The Singularity Institute with the substitution made:
But the real heart of The Future is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of The Future to discuss – it’s easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of The Future tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe.
from http://singinst.org/overview/whatisthesingularity
I don’t think it’s gotten that vacuous, at least as SIAI uses it. (They tend to use it pretty narrowly to refer to the intelligence explosion point, at least the people there whom I’ve talked to. The Summit is a bit broader, but I suppose that’s to be expected, what with Kurzweil’s involvement and the need to fill two days with semi-technical and non-technical discussion of intelligence-related technology, science, and philosophy.) You say that it can be replaced with “the future” without any loss, but your example doesn’t really bear that out. If I stumbled upon that passage not knowing its origin, I’d be pretty confused by how it keeps talking about “the future” as though some point about increasing intelligence had already been established as fundamental. (Indeed, the first sentence of that essay defines the Singularity as “the technological creation of smarter-than-human intelligence”, thereby establishing a promise to use it consistently to mean that, and you can’t change that to “the future” without being very confusing to anyone who has heard the word “future” before.)
It may be possible to do a less-lossy Singularity → Future substitution on writings by people who’ve read “The Singularity Is Near” and then decided to be futurists too, but even Kurzweil himself doesn’t use the word so generally.
You are right, it was an exaggeration to say you can swap Singularity with Future everywhere. But it’s an exaggeration born of a truth: many things said about The Singularity are simply things we could say about the future. They are true today and will be just as true in 2045 or 2095 or any other year.
This comes back to the root post and the perfectly smooth nature of the exponential. While smoothness implies there is nothing special brewing in 30 years, it also implies 30 years from now things will look remarkably like today. We will be staring at an upcoming billion-fold improvement in computer capacity and marveling over how it will change everything. Which it will.
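As a rough arithmetic check on that “billion-fold” figure, and on the point that a smooth exponential has no privileged moment, here is a minimal sketch; the doubling times are illustrative assumptions, not anyone’s official projection:

```python
# Growth factor over a 30-year window under a constant doubling time.
# With a constant doubling time the factor is the same whichever
# 30-year window you pick, and its size depends entirely on the
# doubling time you assume (the values below are assumptions).

window_years = 30
for doubling_time in (1.0, 1.5, 2.0):  # years per doubling
    factor = 2 ** (window_years / doubling_time)
    print(f"doubling every {doubling_time} yr: roughly {factor:,.0f}x in {window_years} yr")
```

Only an assumed one-year doubling time yields a billion-fold factor; a two-year doubling time gives closer to thirty-thousand-fold.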
Kurzweil says The Singularity is just “an event which is hard to see beyond”. I submit every 30-year chunk of time is “hard to see beyond”. It’s a long enough time that things will change dramatically. That has always been true and always will be.
I think that if The Future were commonly used, it would rapidly acquire all the weird connotations of The Singularity, or worse.
I am not sure what you mean about the “different levels of intelligence” point. Maybe this:
“A machine intelligence that is of “roughly human-level” is actually likely to be either vastly superior in some domains or vastly inferior in others—simply because machine intelligence so far has proven to be so vastly different from our own in terms of its strengths and weaknesses [...]”
Actually by “different levels of intelligence” I meant your point that humans themselves have very different levels of intelligence, one from the other. That “human-level AI” is a very broad target, not a narrow one.
I’ve never seen it discussed: does an AI require more computation to think about quantum physics than to think about what order to pick up items in the grocery store? How about training time? Is it a little more, or orders of magnitude more? I don’t think it is known.
Human intelligence can go down pretty low at either end of life—and in sickness. There is a bit of a lump of well people in the middle, though—where intelligence is not so widely distributed.
The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of that range will be gradually swallowed up.
More? If anything, I suspect thinking about quantum physics takes less intelligence; it’s just not what we’ve evolved to do. An abstraction inversion, of sorts.
Hm. I also have this pet theory that some past event (that one near-extinction?) has caused humans to have less variation in intelligence than most other species, thus causing a relatively egalitarian society. Admittedly, this is something I have close to zero evidence for—I’m mostly using it for fiction—but it would be interesting to see, if you’ve got evidence for or (I guess more likely) against.
Human intelligence can go down pretty low at either end of life—and in sickness. There is a bit of a lump in the middle, though—where intelligence is not so widely distributed.
The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of the ability range will be swallowed up.
Machines designing machines will indeed be a massive change to the way phenotypes evolve. However it is already going on today—to some extent.
I expect machine intelligence won’t surpass human intelligence rapidly—but rather gradually, one faculty at a time. Memory and much calculation have already gone.
The extent to which machines design and build other machines has been gradually increasing for decades—in a process known as “automation”. That process may pick up speed, and by the time machines are doing more cognitive work than humans it might be going at a reasonable rate.
Automation takes over jobs gradually—partly because the skills needed for those jobs are not really human-level. Many cleaners and bank tellers were not using their brains to full capacity in their work—and simple machines could do their jobs for them.
However, this bunches together the remaining human workers somewhat—likely increasing the rate at which their jobs will eventually go.
So: possibly relatively rapid and dramatic changes—but most of the ideas used to justify using the “singularity” term seem wrong. Here is some more orthodox terminology:
http://en.wikipedia.org/wiki/Digital_Revolution
http://en.wikipedia.org/wiki/Information_Revolution
I discussed this terminology in a recent video/essay:
http://alife.co.uk/essays/engineering_revolution/