I would like feedback on my recent blog post:
http://www.kmeme.com/2010/07/singularity-is-always-steep.html
It’s simplistic for this crowd, but something that bothered me for a while. When I first saw Kurzweil speak in person (GDC 2008) he of course showed both linear and log scale plots. But I always thought the log scale plots were just a convenient way to fit more on the screen, that the “real” behavior was more like the linear scale plot, building to a dramatic steep slope in the coming years.
Instead I now believe in many cases the log plot is closer to “the real thing”, or at least to how we perceive that thing. For example, in the post I talk about computational capacity. I believe the exponential increase in capacity translates into a perceived linear increase in utility. A computer twice as fast is only incrementally more useful, in terms of what applications can be run. This holds true today and will hold true in 2040 or any other year.
Therefore computational utility is incrementally increasing today and will be incrementally increasing in 2040 or any future date. It’s not building to some dramatic peak.
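Just to make the claim concrete, here is a tiny sketch (my own toy model, not anything from Kurzweil) where utility is assumed to scale like log2 of capacity; each doubling then adds the same fixed increment:

```python
# Toy model: capacity doubles every year, and perceived utility is
# assumed to scale like log2(capacity). Exponential growth in capacity
# then shows up as a constant (linear) gain in utility each year.
import math

capacity = 1.0
for year in range(6):
    utility = math.log2(capacity)  # assumed perception model
    print(f"year {year}: capacity x{capacity:>4.0f}, utility {utility:.1f}")
    capacity *= 2  # one doubling per year
```

The log model is just a stand-in for “each doubling feels like the same size step”, which is all the argument needs.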
None of this says anything against the possibility of a Singularity. If you pass the threshold where machine intelligence is possible, you pass it, whatever the perceived rate of progress at the time.
My essay on the topic:
http://alife.co.uk/essays/the_singularity_is_nonsense/
See also:
“The Singularity” by Lyle Burkhead—see the section “Exponential functions don’t have singularities!”
http://www.geniebusters.org/29_singularity.html
It’s not exponential, it’s sigmoidal
http://radar.oreilly.com/2007/11/its-not-exponential-its-sigmoi.html
The Singularity Myth
http://www.growth-dynamics.com/articles/Kurzweil.htm
Singularity Skepticism: Exposing Exponential Errors
http://www.youtube.com/watch?v=p_svQQ5g2hk
IMO, those interested in computational limits should discuss per-kg figures.
The metric Moore’s law uses is not much use really—since it would be relatively easy to make large asynchronous ICs with lots of faults—which would make a complete mess of the “law”.
I would love to see an ongoing, big wiki-style FAQ addressing all the received criticisms of the singularity — of course, refuting the refutable ones and accepting the sensible ones.
A version on steroids of what this one did with Atheism.
The team would be:
one guy inviting and sorting out criticism and updating the website.
an ad hoc team of responders.
It seems criticism and answers have been scattered all over. There seems to be no one-stop source for that.
Here’s a pretty extensive FAQ, though I have reservations about a lot of the answers.
The authors are—or were—SI fellows, though—and the SI is a major Singularity promoter. Is that really a sensible place to go for Singularity criticism?
http://en.wikipedia.org/wiki/Technological_singularity#Criticism lists some of the objections.
Wow, good stuff. I especially liked this one of yours, not linked above:
http://alife.co.uk/essays/the_intelligence_explosion_is_happening_now/
I called the bluff on the exponential itself, but I was willing to believe that crossing the brain-equivalent threshold and the rise of machine intelligence could produce some kind of sudden acceleration or event. I felt The Singularity wasn’t going to happen because of exponential growth itself, but might still happen because of where exponential growth takes us.
But you make a very good case that the whole thing is bunk. I especially like the “different levels of intelligence” point, had not heard that before re: AI.
But I still find it tempting to say there is just something special about machines that can design other machines; that, like pointing a camcorder at a TV screen, it leads to some kind of instant recursion. Then again, maybe it is just that: a neat trick, but not something which changes everything all of a sudden.
I wonder if someone 50 years ago said, “someday computers will display high-quality video and everyone will watch computers instead of TV or film”. Sure, it is happening, but it’s a rather long, slow transition which in fact might never fully complete. Maybe AI is more like that.
IIRC, Vinge said that the Singularity might look like a shockingly sudden jump from an earlier point of view, but looking back over it, it might seem like a comprehensible if somewhat bumpy road.
It hasn’t been fast, but I think a paleolithic human would have a hard time understanding how an economic crisis is possible.
I’m starting to believe the term The Singularity can be replaced with The Future without any loss. Here is something from The Singularity Institute with the substitution made:
But the real heart of The Future is the idea of better intelligence or smarter minds. Humans are not just bigger chimps; we are better chimps. This is the hardest part of The Future to discuss – it’s easy to look at a neuron and a transistor and say that one is slow and one is fast, but the mind is harder to understand. Sometimes discussion of The Future tends to focus on faster brains or bigger brains because brains are relatively easy to argue about compared to minds; easier to visualize and easier to describe.
from http://singinst.org/overview/whatisthesingularity
I don’t think it’s gotten that vacuous, at least as SIAI uses it. (They tend to use it pretty narrowly to refer to the intelligence explosion point, at least the people there whom I’ve talked to. The Summit is a bit broader, but I suppose that’s to be expected, what with Kurzweil’s involvement and the need to fill two days with semi-technical and non-technical discussion of intelligence-related technology, science, and philosophy.) You say that it can be replaced with “the future” without any loss, but your example doesn’t really bear that out. If I stumbled upon that passage not knowing its origin, I’d be pretty confused by how it keeps talking about “the future” as though some point about increasing intelligence had already been established as fundamental. (Indeed, the first sentence of that essay defines the Singularity as “the technological creation of smarter-than-human intelligence”, thereby establishing a promise to use it consistently to mean that, and you can’t change that to “the future” without being very, very confusing to anyone who has heard the word “future” before.)
It may be possible to do a less-lossy Singularity → Future substitution on writings by people who’ve read “The Singularity Is Near” and then decided to be futurists too, but even Kurzweil himself doesn’t use the word so generally.
You are right, it was an exaggeration to say you can swap Singularity with Future everywhere. But it’s an exaggeration born out of a truth. Many things said about The Singularity are simply things we could say about the future. They are true today but will be true again in 2045 or 2095 or any year.
This comes back to the root post and the perfectly smooth nature of the exponential. While smoothness implies there is nothing special brewing in 30 years, it also implies 30 years from now things will look remarkably like today. We will be staring at an upcoming billion-fold improvement in computer capacity and marveling over how it will change everything. Which it will.
Kurzweil says The Singularity is just “an event which is hard to see beyond”. I submit every 30-year chunk of time is “hard to see beyond”. It’s a long enough time that things will change dramatically. That has always been true and always will be.
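For what it’s worth, the billion-fold figure is just 30 straight doublings, and under constant annual doubling every 30-year window gives the same factor:

```python
# 30 doublings, one per year, is the same ~1.07 billion-fold jump
# whether the window starts in 2010, 2040, or 2095.
factor = 2 ** 30
print(f"{factor:,}")  # 1,073,741,824
```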
I think that if The Future were commonly used, it would rapidly acquire all the weird connotations of The Singularity, or worse.
I am not sure what you mean about the “different levels of intelligence” point. Maybe this:
“A machine intelligence that is of “roughly human-level” is actually likely to be either vastly superior in some domains or vastly inferior in others—simply because machine intelligence so far has proven to be so vastly different from our own in terms of its strengths and weaknesses [...]”
Actually by “different levels of intelligence” I meant your point that humans themselves have very different levels of intelligence, one from the other. That “human-level AI” is a very broad target, not a narrow one.
I’ve never seen it discussed: does an AI require more computation to think about quantum physics than to think about what order to pick up items in the grocery store? How about training time? Is it a little more, or orders of magnitude more? I don’t think it is known.
Human intelligence can go down pretty low at either end of life—and in sickness. There is a bit of a lump of well people in the middle, though—where intelligence is not so widely distributed.
The intelligence required to do jobs is currently even more spread out. As automation progresses, the low end of that range will be gradually swallowed up.
More? If anything, I suspect thinking about quantum physics takes less intelligence; it’s just not what we’ve evolved to do. An abstraction inversion, of sorts.
Hm. I also have this pet theory that some past event (that one near-extinction?) has caused humans to have less variation in intelligence than most other species, thus causing a relatively egalitarian society. Admittedly, this is something I have close to zero evidence for—I’m mostly using it for fiction—but it would be interesting to see, if you’ve got evidence for or (I guess more likely) against.
Machines designing machines will indeed be a massive change to the way phenotypes evolve. However it is already going on today—to some extent.
I expect machine intelligence won’t surpass human intelligence rapidly—but rather gradually, one faculty at a time. Memory and much calculation have already gone.
The extent to which machines design and build other machines has been gradually increasing for decades—in a process known as “automation”. That process may pick up speed, and perhaps by the time machines are doing more cognitive work than humans it might be going at a reasonable rate.
Automation takes over jobs gradually—partly because the skills needed for those jobs are not really human-level. Many cleaners and bank tellers were not using their brain to its full capacity in their work—and simple machines could do their jobs for them.
However, this bunches together the remaining human workers somewhat—likely increasing the rate at which their jobs will eventually go.
So: possibly relatively rapid and dramatic changes—but most of the ideas used to justify using the “singularity” term seem wrong. Here is some more orthodox terminology:
http://en.wikipedia.org/wiki/Digital_Revolution
http://en.wikipedia.org/wiki/Information_Revolution
I discussed this terminology in a recent video/essay:
http://alife.co.uk/essays/engineering_revolution/
This is easier to say when you’re near the top of the current curve.
It doesn’t affect me much that my computer can’t handle hi-def youtube, because I’m just a couple of doubling times behind the state of the art.
But if you were using a computer ten doubling times back, you’d have trouble even just reading lesswrong. Even if you overcame the format and software issues, we’d be trading funny cat videos that are bigger than all your storage. You’d get nothing without a helper god to downsample them.
When the singularity approaches, the doubling time will decrease, for some people. Maybe not for all.
Maybe it will /feel/ like a linear increase in utility for the people whose abilities are being increased right along. For people who are 10 doublings behind and still falling, it will be obvious something is different.
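A rough sketch of “behind and still falling”, assuming (numbers made up purely for illustration) that the frontier doubles every year while a lagging group only doubles every two years:

```python
# Frontier doubles every year; the lagging group doubles every two years.
# The gap, measured in doublings, keeps growing without bound.
import math

for year in range(0, 21, 5):
    frontier = 2.0 ** year        # one doubling per year
    laggard = 2.0 ** (year / 2)   # one doubling every two years
    gap = math.log2(frontier / laggard)
    print(f"year {year:>2}: behind by {gap:.1f} doublings")
```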
Consider $/MIPS available in the mainstream open market. The doubling time of this can’t go down “for some people”, it can only go down globally. Will this doubling time decrease leading up to the Singularity? Or during it?
I always felt that’s what the Singularity was, an acceleration of Moore’s Law type progress. But I wrote the post because I think it’s easy to see a linear plot of exponential growth and say “look there, it’s shooting through the roof, that will be crazy!”. But in fact it won’t be any crazier than progress is today.
It will require a new growth term, machine intelligence kicking in for example, to actually feel like things are accelerating.
It could if, for example, it were only available in large chunks. If you have $50 today you can’t get the $/MIPS of a $5000 server. You could maybe rent the time, but that requires a high level of knowledge, existing internet access at some level, and an application that is still meaningful on a remote basis.
The first augmentation technology that requires surgery will impose a different kind of ‘cost’, and will spread unevenly even among people who have the money.
It’s also important to note that an increase in doubling time would show up as a /bend/ in a log scale graph, not a straight line.
Yes, Kurzweil does show a bend in the real data in several cases. I did not try to duplicate that in my plots; I just did straight doubling every year.
I think any bending in the log scale plot could be fairly called acceleration.
But just the doubling itself, while it leads to ever-increasing step sizes, is not acceleration. In the case of computer performance it seems clear exponential growth of power produces only linear growth in utility.
I feel this point is not made clear in all contexts. In presentations I felt some of the linear scale graphs were used to “hype” the idea that everything was speeding up dramatically. I think only the bend points to a “speeding up”.
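Here is a small check of the straight-line-versus-bend point, assuming one series with a constant one-year doubling time and one whose doubling time shrinks by 5% a year (my numbers, purely illustrative):

```python
# On a log scale, a quantity is just its running count of doublings.
# Constant doubling time: the count grows by 1 per year (a straight line).
# Shrinking doubling time: each year adds more doublings than the last
# (an upward bend), which is what "acceleration" would look like.
doublings_constant = 0.0
doublings_shrinking = 0.0
doubling_time = 1.0
history = []
for year in range(31):
    history.append((year, doublings_constant, doublings_shrinking))
    doublings_constant += 1.0
    doublings_shrinking += 1.0 / doubling_time
    doubling_time *= 0.95  # doubling time shrinks 5% per year

for year, straight, bending in history[::10]:
    print(f"year {year:>2}: straight {straight:5.1f}, bending {bending:5.1f}")
```

The first column grows by the same amount each decade; the second grows by more each decade, and that extra growth is the bend.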
I agree with your post, especially since I expect to win my bet with Eliezer.
Did you notice that, as phrased in the link, your bet is about the following event: “[at a certain point in time under a few conditions] it will be interesting to hear Eliezer’s excuses”? Technically, all Eliezer will have to do to win the bet will be to write a boring excuse.
Eliezer was the one who linked to that: the bet is about whether those conditions will be satisfied.
Anyway, he has already promised (more or less) not to make excuses if I win.
I don’t know what this bet is, and I don’t see a link anywhere in your post.
http://wiki.lesswrong.com/wiki/Bets_registry
(I am the original Unknown but I had to change my name when we moved from Overcoming Bias to Less Wrong because I don’t know how to access the other account.)
Any chance you and Eliezer could set a date on your bet? I’d like to import the 3 open bets to Prediction Book, but I need a specific date. (PB, rightly, doesn’t do open-ended predictions.)
e.g. perhaps 2100, well after many Singularitarians expect some sort of AI, and also well after both of your actuarial death dates.
If we agreed on that date, what would happen in the event that there was no AI by that time and both of us are still alive? (These conditions are surely very unlikely but there has to be some determinate answer anyway.)
You could either:
1. donate the money to charity under the view ‘and you’re both wrong, so there!’
2. say that the prediction is implicitly a big AND - ‘there will be an AI by 2100 AND said first AI will not have… etc.’, and that the conditions allow ‘short-circuiting’ when any AI is created; with this change, reaching 2100 is a loss on your part.
3. Like #2, but the loss is on Eliezer’s part (the bet changes to ‘I think there won’t be an AI by 2100, but if there is, it won’t be Friendly and etc.’)
I like #2 better since I dislike implicit premises, and this (while you two are still relatively young and healthy) is as good a time as any to clarify the terms. But #1 follows the Long Bets formula more closely.
Eliezer and I are probably about equally confident that “there will not be AI by 2100, and both Eliezer and Unknown will still be alive” is incorrect. So it doesn’t seem very fair to select either 2 or 3. So option 1 seems better.