I really like what SIAI is trying to do, the spirit that it embodies.
However, I am getting more skeptical of any projections or projects that are not based on good old-fashioned scientific knowledge (my own included).
You can progress scientifically toward AI if you copy human architecture somewhat, by making predictions about how the brain works and organises itself. However, I don’t see how we can hope to make significant progress on non-human AI. How will we test whether our theories are correct or on the right path? For example, what evidence from the real world would convince the SIAI to abandon the search for a fixed decision theory as a module of the AI? And why isn’t SIAI looking for that evidence, to make sure that you aren’t wasting your time?
For every Einstein who makes the “right” cognitive leap there are probably orders of magnitude more Kelvins who do things like predict that meteors provide fuel for the sun.
How are you going to winnow out the wrong ideas if they are consistent with everything we know, especially if they are pure mathematical constructs?
I think you’re making the mistake of relying too heavily on our one sample of a general intelligence: the human brain. How do we know which parts to copy and which parts to discard? To draw an analogy to flight, how can we tell which parts of the brain are equivalent to a bird’s beak and which parts are equivalent to wings? We need to understand intelligence before we can successfully implement it. Research on the human brain is expensive, requires going through a lot of red tape, and it’s already being done by other groups. More importantly, planes do not fly because they are similar to birds. Planes fly because we figured out a theory of aerodynamics. Planes would fly just as well if no birds ever existed, and explaining aerodynamics doesn’t require any talk of birds.
I don’t see how we can hope to make significant progress on non-bird flight. How will we test whether our theories are correct or on the right path?
Just because you can’t think of a way to solve a problem doesn’t mean that a solution is intractable. We don’t yet have the equivalent of a theory of aerodynamics for intelligence, but we do know that it is a computational process. Any algorithm, including whatever makes up intelligence, can be expressed mathematically.
As to the rest of your comment, I can’t really respond to the questions about SIAI’s behavior, since I don’t know much about what they’re up to.
The bird analogy rubs me the wrong way more and more. I really don’t think it’s a fair comparison. Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI. Certainly intelligence might have some nice underlying theory, so we should pursue that angle as well, but I don’t see how we can be certain either way.
I think the analogy still maps even if this is true. We can’t build useful AIs until we really understand intelligence. This holds no matter how complicated intelligence ends up being.
First, nothing is “fundamentally complex.” (See the reductionism sequence.) Second, brain emulation won’t work for FAI because humans are not stable goal systems over long periods of time.
You’re overreaching. Uploads could clearly be useful, whether we understand how they work or not.
Agreed, uploads aren’t provably friendly. But you have to weigh that danger against the danger of AGI arriving before FAI.
But you still can’t get to FAI unless you (or the uploads) understand intelligence.
Right, the two things you must weigh and ‘choose’ between (in the sense of research, advocacy, etc.):
1) Go for FAI, with the chance that AGI comes first
2) Go for uploads, with the chance they go crazy when self-modifying
You don’t get provable friendliness with uploads without understanding intelligence, but you do get a potential upgrade path to superintelligence that doesn’t result in the total destruction of humanity. The safety of that path may be small, but the probability of developing FAI before AGI is likewise small, so it’s not clear in my mind which option is better.
At the workshop after the Singularity Summit, almost everyone (including Eliezer, Robin, myself, and all the SIAI people) said they hoped that uploads would be developed before AGI. The only folk who took the other position were those actively working on AGI (but not FAI) themselves.
Also, people at SIAI and FHI are working on papers on strategies for safer upload deployment.
Interesting, thanks for sharing that. I take it then that it was generally agreed that the time frame for FAI was probably substantially shorter than for uploads?
Separate (as well as overlapping) inputs go into de novo AI and brain emulation, giving two distinct probability distributions. AI development seems more uncertain, so that we should assign substantial probability to it coming before or after brain emulation. If AI comes first/turns out to be easier, then FAI-type safety measures will be extremely important, with less time to prepare, giving research into AI risks very high value.
If brain emulations come first, then shaping the upload transition to improve the odds of solving collective action problems like regulating risky AI development looks relatively promising. Incidentally, however, a lot of useful and as yet unpublished analysis (e.g. implications of digital intelligences that can be copied and run at high speed) is applicable to thinking about both emulation and de novo AI.
I think AGI before human uploads is far more likely. If you have hardware capable of running an upload, the trial-and-error approach to AGI will be a lot easier (in the form of computationally expensive experiments). Also, it is going to be hard to emulate a human brain without knowing how it works (neurons are very complex structures and it is not obvious which component processes need to appear in the emulation), and as you approach that level of knowledge, trial-and-error again becomes easier, in the form of de novo AI inspired by knowledge of how the human brain works.
Maybe you could do a coarse-grained emulation of a living brain by high-resolution fMRI-style sampling, followed by emulation of the individual voxels on the basis of those measurements. You’d be trying to bypass the molecular and cellular complexities, by focusing on the computational behavior of brain microregions. There would still be potential for leakage of discoveries made in this way into the AGI R&D world before a complete human upload was carried out, but maybe this method closes the gap a little.
I can imagine upload of simple nonhuman nervous systems playing a role in the path to AGI, though I don’t think it’s at all necessary—again, if you have hardware capable of running a human upload, you can carry out computational experiments in de novo AI which are currently expensive or impossible. I can also see IA (intelligence augmentation) of human beings through neurohacks, computer-brain interfaces, and sophisticated versions of ordinary (noninvasive) interfaces. I’d rate a Singularity initiated by that sort of IA as considerably more likely than one arising from uploads, unless they’re nondestructive low-resolution MRI-produced uploads. Emulating a whole adult human brain is not just an advanced technological action, it’s a rather specialized one, and I expect the capacity to do so to coincide with the capacity to do IA and AI in a variety of other forms, and for superhuman intelligence to arise first on that front.
To sum up, I think the contenders in the race to produce superintelligence are trial-and-error AGI, theory-driven AGI, and cognitive neuroscience. IA becomes a contender only when cognitive neuroscience advances enough that you know what you’re doing with these neurohacks and would-be enhancements. And uploads are a bit of a parlor trick that’s just not in the running, unless it’s accomplished via modeling the brain as a network of finite-state-machine microregions to be inferred from high-resolution fMRI. :-)
The following is a particular take on the future, hopefully demonstrating a realistic path for uploads occurring before AGI.
Imagine a high-fidelity emulation of a small mammal brain (on the order of 1 g) is demonstrated, running at about 1/1000th real time. The computational demand for such a code is roughly a million times less than for emulating a human brain in real time.

Such a demonstration would give immense credibility to whole brain emulations, even of humans. It’s not unlikely that the military would be willing to suddenly throw billions into WBE research. That is, the military isn’t without imagination, and once the potential for human brain emulation has been shown, it’s easy to see the incredible ramifications it would have.

The big unknown would be how much the small-brain uploads could be optimized. If we can’t optimize the emulations’ code, then the only path to human uploads would be through Moore’s law, which would take about two decades: ample time for the neuroscience breakthroughs to impact AGI. If, on the other hand, the codes prove to allow large optimizations, then intense funding from the military could get us to human uploads in a matter of years, leaving very little time for AGI theory to catch up.
My own intuition is that the first whole brain emulations will allow for substantial room for optimization.
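For what it’s worth, the scaling claim above can be sanity-checked in a few lines. The brain masses, the doubling times, and the resulting gap below are my own rough assumptions, not figures from the comment, so treat this as a back-of-the-envelope sketch only.

```python
import math

# Assumed round numbers (not from the comment above):
small_brain_g = 1.0      # mass of the demonstrated emulation, grams
human_brain_g = 1400.0   # adult human brain, grams
slowdown = 1000.0        # demo runs at 1/1000th of real time

# Gap between the demo and a real-time human emulation.
compute_gap = (human_brain_g / small_brain_g) * slowdown
print(f"compute gap: ~{compute_gap:,.0f}x")  # roughly a million-fold

# If hardware alone must close the gap, count Moore's-law doublings.
doublings = math.log2(compute_gap)
for months_per_doubling in (12, 18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling}-month doubling time: ~{years:.0f} years")

# Every 10x speedup from better software removes log2(10) ~= 3.3 doublings,
# which is why the scope for optimizing the emulation code matters so much.
```

Under these assumptions, hardware alone closes the gap in roughly twenty to forty years depending on the doubling time one believes in, so the “two decades” figure corresponds to the optimistic end of that range.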
How valuable is trying to shape the two probability distributions themselves? Should we be devoting resources to encouraging people to do research in computational neuroscience instead of AGI?
It’s hard to change the rate of development of fields. It’s easier to do and publish core analyses of the issues with both approaches, so as to 1) know better where to focus efforts, and 2) make a more convincing case for any reallocation of resources.
re: “almost everyone [...] said they hoped that uploads would be developed before AGI”
IMO, that explains much of the interest in uploads: wishful thinking.
Reminds me of Kevin Kelly’s The Maes-Garreau Point:

“Nonetheless, her colleagues really, seriously expected this bridge to immortality to appear soon. How soon? Well, curiously, the dates they predicted for the Singularity seem to cluster right before the years they were expected to die. Isn’t that a coincidence?”
Possibly the single most disturbing bias-related essay I’ve read, because I realized as I was reading it that my own uploading prediction was very close to my expected lifespan (based on my family history) - only 10 or 20 years past my death. It surprises me sometimes that no one else on LW/OB seems to’ve heard of Kelly’s Maes-Garreau Point.
It’s an interesting methodology, but the Maes-Garreau data is just terrible quality. For every person I know on that list, the attached point estimate is misleading to grossly misleading. For instance, it gives Nick Bostrom as predicting a Singularity in 2004, when Bostrom actually gives a broad probability distribution over the 21st century, with much probability mass beyond it as well. 2004 is in no way a good representative statistic of that distribution, and someone who had read his papers on the subject or emailed him could easily find that out. The Yudkowsky number was the low end of a range (if I say that between 100 and 500 people were at an event, that’s not the same thing as an estimate of 100 people!), and subsequently disavowed in favor of a broader probability distribution regardless. Marvin Minsky is listed as predicting 2070, when he has also given an estimate of most likely “5 to 500” years, and this treatment is inconsistent with the treatment of the previous two estimates. Robin Hanson’s name is spelled incorrectly, and the figure beside his name is grossly unrepresentative of his writing on the subject (available for free on his website for the ‘researcher’ to look at). The listing for Kurzweil gives 2045, which is when Kurzweil expects a Singularity, as he defines it (meaning just an arbitrary benchmark for total computing power), but in his books he suggests that human brain emulation and life extension technology will be available in the previous decade, which would be the “living long enough to live a lot longer” break-even point if he were right about that.
I’m not sure about the others on that list, but given the quality of the observed data, I don’t place much faith in the dataset as a whole. It also seems strangely sparse: where is Turing, or I.J. Good? Dan Dennett, Stephen Hawking, Richard Dawkins, Doug Hofstadter, Martin Rees, and many other luminaries are on record predicting the eventual creation of superintelligent AI on long time-scales well after their actuarially predicted deaths. I think this search failed to pick up anyone using equivalent language in place of the term ‘Singularity,’ and was skewed as a result. Also, people who think that a technological singularity or the like will probably not occur for over 100 years are less likely to think it an important issue to talk about right now, and so are less likely to appear in a group selected by looking for attention-grabbing pronouncements.
A serious attempt at this analysis would aim at the following:
1) Not using point estimates, which can’t do justice to a probability distribution. Give a survey that lets people assign their probability mass to different periods, or at least specifically ask for an interval, e.g. 80% confidence that an intelligence explosion will have begun/been completed after X but before Y (a toy sketch of analyzing such interval responses follows this list).
2) Emailing the survey to living people to get their actual estimates.
3) Surveying a group identified via some other criterion (like knowledge of AI; note that participants at the AI@50 conference were electronically surveyed on timelines to human-level AI) to reduce selection effects.
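If survey data of that kind existed, the core analysis would be simple. Everything below is invented for illustration: the respondents, the intervals, and the flat 85-year life expectancy are placeholders, not real data.

```python
from statistics import median

# Hypothetical rows: (respondent, birth_year, low_80, high_80), where the
# last two numbers bound an 80% interval for the start of an intelligence
# explosion. All values are made up for illustration.
responses = [
    ("A", 1950, 2040, 2090),
    ("B", 1975, 2035, 2070),
    ("C", 1985, 2060, 2150),
]

LIFE_EXPECTANCY = 85  # crude stand-in; a real analysis would use actuarial tables


def maes_garreau_gap(birth_year, low, high):
    """Interval midpoint minus expected death year.

    Negative or near-zero values mean the respondent places the event
    within (or just beyond) their own expected lifetime."""
    midpoint = (low + high) / 2
    expected_death = birth_year + LIFE_EXPECTANCY
    return midpoint - expected_death


gaps = [maes_garreau_gap(b, lo, hi) for _, b, lo, hi in responses]
print("median Maes-Garreau gap (years):", median(gaps))
# A median clustered just below zero across many respondents would support
# the effect; a broad spread or strongly positive medians would not.
```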
See, this is the sort of response I would expect: a possible bias is identified, some basic data is collected which suggests that it’s plausible, and then we begin a more thorough inspection. Complete silence, though, was not what I expected.
Turing would be hard to do. He predicts in 1950 that in about another 50 years (i.e. 2000; Turing was born in 1912, so he would’ve been 88) an average interrogator would have no more than a 70% chance of making the right identification after five minutes of questioning, and that this would be as good as a real mind. But is this a date for the Singularity or for genuine consciousness?
Yes, I considered that ambiguity, and certainly you couldn’t send him a survey. But it gives a lower bound, and Turing does talk about machines equaling or exceeding human capacities across the board.
Hm. Would it be justifiable to extrapolate Turing’s predictions? Because we know that he was off by at least a decade on just the AI; presumably any Singularity would have to be off by that much or more.
It would be very surprising if you are right. I expect most of the people who have thought about the question of how such estimates could be biased would think of this idea within the first several minutes (even if without experimental data).
It may be an obvious point on which to be biased, but how many such people then go on to work out birthdates and prediction dates, or to look for someone else’s work along those lines like Maes-Garreau?
A lot of folk at SIAI have looked at, and for, age correlations.
And found?
1) Among those sampled, the young do not seem to systematically predict a later Singularity.
2) People do update their estimates based on incremental data (as they should), so we distinguish between estimated dates, and estimated time-from-present.
2a) A lot of people burned by the 1980s AI bubble shifted both of those into the future.
3) A lot of AI folk with experience from that bubble have a strong taboo against making predictions for fear of harming the field by raising expectations. This skews the log of public predictions.
4) Younger people working on AGI (like Shane Legg, Google’s Moshe Looks) are a self-selected group and tend to think that it is relatively close (within decades, and within their careers).
5) Random smart folk, not working on AI (physicists, philosophers, economists), of varied ages, tend to put broad distributions on AGI development with central tendencies in the mid-21st century.
Is there any chance of the actual data or writeups being released? It’s been almost 3 years now.
Lukeprog has a big spreadsheet. I don’t know his plans for it.
Hm… I wonder if that’s the big spreadsheet ksotala has been working on for a while?
Yes. An improved version of the spreadsheet, which serves as the data set for Stuart’s recent writeup, will probably be released when the Stuart+Kaj paper is published, or perhaps earlier.
evidence for, apparently
Yes, but shouldn’t we use the earliest predictions by a person? Even a heavily biased person may produce reasonable estimates given enough data. The first few estimates are likely to be based mostly on intuition—or bias, in other words.
But which way? There may be a publication bias towards ‘true believers’, but then there may also be a bias towards unobjectionably far-away estimates like Minsky’s 5 to 500 years. (One wonders what odds Minsky genuinely assigns to the first AI being created in 2500 AD.)
Reasonable. Optimism is an incentive to work, and self-deception is probably relevant.
Evidence for, isn’t it? Especially if they assign even weak belief in significant life-extension breakthroughs, ~2050 is within their conceivable lifespan (since they know humans currently don’t live past ~120, they’d have to be >~80 to be sure of not reaching 2050).
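A trivial check of that “>~80” figure, taking the ~120-year cap at face value and assuming “now” is around 2010 (that date is my assumption, not the commenter’s):

```python
NOW, TARGET, MAX_LIFESPAN = 2010, 2050, 120  # NOW is an assumed value

# Minimum current age at which reaching TARGET is impossible under the cap:
min_age_to_be_sure = MAX_LIFESPAN - (TARGET - NOW)
print(min_age_to_be_sure)  # 80 -- anyone younger could, in principle, still reach 2050
```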
Kelly doesn’t give references for the dates he cites as predictions for the singularity. Did Eliezer really predict at some point that the singularity would occur in 2005? That sounds unlikely to me.
Hmm, I found this quote on Google:

A few years back I would have said 2005 to 2020. I got this estimate by taking my real guess at the Singularity, which was around 2008 to 2015, and moving the dates outward until it didn’t seem very likely that the Singularity would occur before then or after then.
Seems to me that Kelly didn’t really interpret the prediction entirely reasonably (picking the earlier date) but the later date would not disconfirm his theory either.
Eliezer has disavowed many of his old writings:

I’ve been online since a rather young age. You should regard anything from 2001 or earlier as having been written by a different person who also happens to be named “Eliezer Yudkowsky”. I do not share his opinions.
But re the 2005 listing, cf. the now-obsolete “Staring Into the Singularity” (2001):

I do not “project” when the Singularity will occur. I have a “target date”. I would like the Singularity to occur in 2005, which I think I would have a reasonable chance of doing via AI if someone handed me a hundred million dollars a year. The Singularity Institute would like to finish up in 2008 or so.
Doesn’t sound much like the Eliezer you know, does it...
Re: “Kelly doesn’t give references for the dates he cites as predictions for the singularity.”
That sucks. Also, “the singularity” is said to occur when minds get uploaded?!?
And “all agreed that once someone designed the first super-human artificial intelligence, this AI could be convinced to develop the technology to download a human mind immediately”?!?
I have a rather different take on things on my “On uploads” video:
http://www.youtube.com/watch?v=5myjWld1qN0
If so, I rather hope they keep the original me around too. I think I would prefer the higher-res, post-superintelligence version to the first versions that work just well enough to get a functioning human-like being out the other end.
I tentatively agree: there may well be a way to FAI that doesn’t involve normal humans understanding intelligence, but rather improved humans understanding intelligence, for example carefully modified uploads or genetically engineered/selected smarter humans.
I rather suspect uploads would arrive at AGI before their more limited human counterparts. Although I suppose uploading only the right people could theoretically increase the chances of FAI coming first.
Re: “Flight is based on some pretty simple principles, intelligence not necessarily so. If intelligence turns out to be fundamentally complex then emulating a physical brain might be the easiest way to create AI.”
Hmm. Are there many more genes expressed in brains than in wings? IIRC, it’s about equal.
Okay, let us say you want to make a test for intelligence, just as there was a test for the lift generated by a fixed wing.
As you are testing a computational system, there are two things you can look at: the input-output relation and the dynamics of the internal system.
Looking purely at the IO relation is not informative; such tests can be fooled by GLUTs (giant lookup tables) or compressed versions of the same. This is why the Loebner Prize has not led to real AI in general. And making a system that can solve a single problem we consider to require intelligence (such as chess) just gets you a system that can solve chess and does not generalize.
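To make the GLUT point concrete, here is a toy illustration; the table entries and prompts are invented, and a real GLUT would of course be astronomically larger.

```python
# A caricature of a GLUT (giant lookup table) "chatbot": perfect on the
# inputs it memorized, useless on everything else.
GLUT = {
    "What is 2+2?": "4",
    "Are you conscious?": "Of course I am.",
    "What is the capital of France?": "Paris",
}

def glut_answer(prompt: str) -> str:
    # Canned deflection for anything outside the table.
    return GLUT.get(prompt, "Interesting question! Tell me more.")

print(glut_answer("What is 2+2?"))   # looks intelligent
print(glut_answer("What is 7*6?"))   # falls back to the deflection
# A judge who only asks questions from the table sees a flawless IO record,
# which is exactly why IO testing alone cannot certify understanding.
```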
Contrast this with the wind tunnels that the Wright brothers had: they could test for lift, which they knew would keep them up.
If you instead look at the dynamics of the system’s internals, those are divorced from our folk idea of intelligence, which is problem solving (unlike the folk theory of flight, which connects nicely with lift from a wing). So what sort of dynamics should we look for?
If the theory of intelligence is correct, the dynamics will have to be found in the human brain. Despite the slowness and difficulties of analysing it, we are generating more data which we should be able to use to narrow down the dynamics.
How would you go about creating a testable theory of intelligence? Preferably without having to build a many person-year project each time you want to test your theory.
Intelligence is defined in terms of response to a variable environment—so you just use an environment with a wide range of different problems in it.
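One crude way to operationalize “an environment with a wide range of different problems” is to score an agent across several unrelated task families and average; the tasks and the narrow agent below are invented stand-ins, not a serious proposal.

```python
import random

def task_parity(agent):
    xs = [random.randint(0, 1) for _ in range(8)]
    return 1.0 if agent("parity", xs) == sum(xs) % 2 else 0.0

def task_max(agent):
    xs = [random.randint(0, 100) for _ in range(8)]
    return 1.0 if agent("max", xs) == max(xs) else 0.0

def task_min(agent):
    xs = [random.randint(0, 100) for _ in range(8)]
    return 1.0 if agent("min", xs) == min(xs) else 0.0

TASKS = [task_parity, task_max, task_min]

def score(agent, trials=300):
    """Average success rate over a battery of different problem types."""
    return sum(task(agent) for _ in range(trials) for task in TASKS) / (trials * len(TASKS))

# A narrow specialist (the "chess program" of this toy world): it nails one
# task family and is useless everywhere else.
def max_specialist(task_name, xs):
    return max(xs) if task_name == "max" else -1  # -1 never matches a valid answer

print(round(score(max_specialist), 2))  # about 0.33 -- no generalization
```

The point of the harness is that a specialist’s score is capped by how many of the problem types it actually handles, which is closer to the folk notion of general problem solving than any single benchmark.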
If a wrong idea is both simple and consistent with everything you know, it cannot be winnowed out. You have to either find something simpler or find an inconsistency.