General AI will not be made in 2011. Confidence: 90%.
The removal of DADT by the US military will result in fewer than 300 soldiers leaving the military in protest. (Note that this may be hard to measure.) Confidence: 95%.
The Riemann Hypothesis will not be proven. Confidence: 75%. (Minor note for the nitpickers: the relevant foundational system is ZFC.)
Ryan Williams’ recent bound on ACC circuits for NEXP (see here for a discussion of Williams’ work) will be tightened in at least one of three ways: the result will be shown to hold for some smaller class of problems than NEXP, the result will be extended to some broader type of circuit than ACC, or the bound on the circuit size ruled out will be improved. Confidence: 60%.
At least one head pastor of a Protestant megachurch in the US will be found to be engaging in homosexual activity. For purposes of this prediction, “megachurch” means a church with regular attendance of at least 3000 people at Sunday services. Confidence: 70%.
Clashes between North Korea and South Korea will result in fatalities. Confidence: 80%.
I hope to hell you’re underconfident about that.
Would you classify MC-AIXI as a General AI?
Given Vast quantities of computing power, it would qualify as a very silly AGI which will eventually try dropping an anvil on its own head just to see what happens.
Roughly, no.
Nice failsafe if it unexpectedly goes “foom” though.
No.
I’m definitely willing to put $300 to $100 on Riemann not being proved—even knowing that you may have some relevant insider knowledge.
I guess one issue is that we couldn’t expect to know for sure by this time next year if it had been proved or not—I’d be willing to go with something like “by January 1 2012, no preprint will exist of an essentially correct proof of the Riemann Hypothesis (in ZFC)”.
Yeah, multiple people have called me out on RH, and thinking about it more, I probably was just being way too underconfident.
GAI: http://predictionbook.com/predictions/2092
DADT: http://predictionbook.com/predictions/2093 (assuming year timescale)
Riemann: http://predictionbook.com/predictions/2094
Ryan Williams: http://predictionbook.com/predictions/2095 (I, uh, hope you’ll be judging that one; I don’t follow complexity theory work as closely as you obviously do.)
Sex scandal: http://predictionbook.com/predictions/2096
Korea: http://predictionbook.com/predictions/2097 (when judging I assume any unilateral attack counts even if the other side doesn’t retaliate, like the Cheonan)
Regarding 4, there are a fair number of people here interested in complexity theory issues, so it shouldn’t be that hard to get people to judge that. Also note that I deliberately made the question more precise by listing three of the more plausible ways the result might be extended, rather than just saying that it would be tightened. That helps make the question clear-cut. (If it were generalized to anything that could reasonably be construed as a tightening of his result, I’d bump the probability up, but the result would be much trickier to judge.)
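For reference, the headline result under discussion, as I understand it (the formulation here is my paraphrase, not part of the original discussion):

```latex
% Williams's headline result (paraphrased): NEXP has no non-uniform
% family of polynomial-size ACC circuits, where ACC circuits are
% constant-depth circuits over AND, OR, NOT, and MOD_m gates (fixed m).
\mathsf{NEXP} \not\subseteq \mathsf{ACC}
```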
I think your confidence for the Riemann Hypothesis not being proven is way too low. Unless there has been some major recent improvement I’m unaware of, next year doesn’t look much better than any recent year. In addition to the apparent improbability of its being solved in any particular year, I would suspect that a problem of this magnitude would show some more significant cracks before being solved.
If I thought there was anything like a 10% chance of AGI in the next year, my priorities would be radically different.
A lot of people in this thread have said that I’m way too underconfident about RH, and thinking it over, they are probably right. At this point, there are two “obviously false” statements that we can’t even disprove:
1) There are zeros of the zeta function arbitrarily close to the line Re(s) = 1.
2) A positive fraction of the non-trivial zeros lie off the critical line Re(s) = 1/2 (with the zeros ordered in the obvious way by the size of the imaginary part).
We also can’t prove the related Lindelöf hypothesis, which is an easy consequence of the Riemann hypothesis.
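For reference, the Lindelöf hypothesis asserts sub-polynomial growth of zeta on the critical line (this is the standard statement, though the formulation here is mine):

```latex
% Lindelöf hypothesis: for every fixed eps > 0,
\zeta\!\left(\tfrac{1}{2} + it\right) = O\!\left(|t|^{\varepsilon}\right)
\quad \text{as } |t| \to \infty
```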
All three of these are things that one would expect to be done before RH is proved, and we don’t seem very close to resolving any of them, with the possible exception of statement 2. There’s some reason to believe that 2 might be disproven by further tightening of Hardy–Littlewood-type results, in the same way that Conrey, building on Levinson’s method, showed that at least two fifths of the zeros lie on the line. (I lack the technical knowledge to evaluate the plausibility that Conrey’s type of result can be further tightened, although the fact that his result in its tightest known form has stood for about 20 years suggests that further tightening is very non-trivial.)

Note that we can prove the slightly weaker statement that almost all the zeros lie within epsilon of the Re(s) = 1/2 line, which is sort of a hybrid of the negations of 1 and 2, but that result is about a hundred years old. (One thing I actually wish I understood better is how, if at all, that result connects to the Hardy–Littlewood-type results. My impression is that they really don’t in any obvious way, but I’m not sure.)

Analogs of RH have also been proven in many different contexts (including the appropriate analogs for finite fields and for certain p-adic settings); I don’t know much about most of those, but it seems those results don’t give obvious hints for how to resolve the original case in any useful fashion.
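To pin down the two unconditional results mentioned above (the statements are standard; the notation here is mine): write N(T) for the number of non-trivial zeros ρ with 0 < Im(ρ) ≤ T, and N₀(T) for the number of those on the critical line.

```latex
% Conrey (1989), by tightening Levinson's method:
\liminf_{T\to\infty} \frac{N_0(T)}{N(T)} \;\ge\; \frac{2}{5}

% Bohr--Landau (1914): for every fixed eps > 0, almost all zeros lie
% within eps of the critical line:
\#\bigl\{\rho : 0 < \operatorname{Im}\rho \le T,\ \bigl|\operatorname{Re}\rho - \tfrac12\bigr| \ge \varepsilon \bigr\} \;=\; o\bigl(N(T)\bigr)
```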
I guess the real reason for my estimate is that there seem to be so many promising techniques for approaching the problem, most of which I don’t understand in much detail, such as using operator theory or connecting the location of the zeros to the behavior of quasicrystals.
I was almost certainly underconfident in my above estimate, and I think everyone was very right to call me out on that, so I won’t be taking any bets on it. I’m not sure how to revise it. Upwards is clearly the correct direction, but thinking about the probability estimate in more detail leaves me feeling very confused.
GAI & Riemann look underconfident to me.
How confident are you that they are underconfident? For each one?
My estimates are 95% Riemann in a year, 97.5% GAI in a year
This looks rather odd with no context quoted! :P
I bet Eliezer really, really, really hopes I’m overconfident about THAT one.
Missing NOTs!
I like this. In other words it’s over 70% that AGI will be invented before 2020.
No. That calculation assumes independent probabilities for each year.
So, it’s not the prediction for the next year only, but for a longer period.
Interesting.
No. One model that could produce that prediction for next year is that one is 89% confident that GAI is impossible, and 90% confident that if possible, it will be discovered next year. Strange beliefs, yes.
But they give 10% chance this year and 11% in the next ten.
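As a minimal sketch of the distinction being made here (the framing as code is mine; the numbers are from the comments above):

```python
# Contrast the two readings: naive independence across years vs. the
# "impossible or imminent" mixture model described above.

P_GAI_PER_YEAR = 0.10  # complement of the stated 90% confidence

# Reading A: an independent 10% chance in each of the next ten years.
p_within_ten_independent = 1 - (1 - P_GAI_PER_YEAR) ** 10
print(f"Independence: P(GAI within 10 years) ~ {p_within_ten_independent:.2f}")  # ~0.65

# Reading B: 89% confident GAI is impossible; if it is possible (11%),
# 90% confident it is discovered next year (and eventually for certain).
p_possible = 0.11
p_next_year = p_possible * 0.90   # ~0.10: matches the original prediction
p_within_ten = p_possible * 1.00  # bounded by 0.11, per the comment above
print(f"Mixture: P(next year) ~ {p_next_year:.2f}, P(within 10 years) <= {p_within_ten:.2f}")
```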
General AI will not be made in 2011. Confidence: 90%.
Seems underconfident. Shane Legg, who is an AGI researcher, predicts:
My longest running prediction, since 1999, has been the time until roughly human level AGI. It’s been consistent since then, though last year I decided to clarify things a bit and put down an actual distribution and some parameters. Basically, I gave it a log-normal distribution with a mean of 2028, and a mode of 2025.
Although it really depends on what you mean by “General AI.”
How representative is Legg’s prediction of people in that field? It “feels” very optimistic, but I don’t have the relevant expertise. What do other researchers think?
My page has some graphs:
http://alife.co.uk/essays/how_long_before_superintelligence/
Ben covered this issue in a 2010 paper:
How long until human-level AI? Results from an expert assessment
Anecdotal evidence suggests Legg’s prediction assigns way more probability mass to the next few decades (leaving almost none for subsequent development, social collapse, etc.) than the vast majority of researchers would. Surveys have been conducted among folk strongly selected for belief in AI (e.g. at Ben’s AGI conference linked below, on transhumanist websites, etc.) that put a big chunk of probability in that period, but usually not as much as Legg.
Unfortunately, there aren’t yet any representative expert polls, and it would be hard to get an expert class that was expert on neuroscience, AI, and outside factors that could speed up progress (e.g. biotech enhancement). Worse, where folk have expressed skeptical views, they have almost always been just negatives with respect to a particular date, rather than probabilities. It seems fairly likely that the median relevant representative expert would assign a probability over 5% but less than 50% to Legg/Kurzweil timelines, particularly if you factored out mysticism/religion-based skepticism.
EDIT: Here’s another survey.
26 of the contributors to the NSF-backed “Managing nano-bio-info-cogno innovations: converging technologies in society” were surveyed on the dates at which various technologies would be developed, with the median predictions reported in Appendix 1.
Page 344 gives a median estimate of 2085 for AI functionally equivalent to the human brain.
This is handy as a less selected group than attendees at an Artificial General Intelligence conference, although they’re still folk with a professional interest in futuristic technologies generally.
This is helpful. One question, though:
Does this mean “For any given year, a relevant expert would only assign 1/20-1/2 the probability of FOOM by that year that Legg and Kurzweil do”? If not, what does it mean?
Shane Legg says that there is a 95% probability of human-level AI by 2045. Kurzweil doesn’t give probabilities, but claims high confidence in Turing Test passing AI by 2029 and a slow takeoff Singularity over the next two decades. I would bet that a representative sample of experts would assign less than 50% probability to human-level AI by 2045, but more than 5%.
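As a rough check on that 95% figure, here is a minimal sketch fitting a log-normal to the mean-2028/mode-2025 parameters quoted above. Treating the distribution as over years after 2000 is my assumption; Legg’s actual baseline isn’t stated here.

```python
# Sketch: implied P(human-level AI by 2045) from a log-normal with
# mean 2028 and mode 2025, assuming the variable is (year - 2000).
from math import erf, log, sqrt

BASE = 2000  # assumed zero point of the distribution (my assumption)
mean, mode = 2028 - BASE, 2025 - BASE

# For X ~ LogNormal(mu, sigma): mean = exp(mu + sigma^2/2), mode = exp(mu - sigma^2).
# Dividing the two equations gives sigma^2 = (2/3) * ln(mean / mode).
sigma2 = (2.0 / 3.0) * log(mean / mode)
mu = log(mode) + sigma2
sigma = sqrt(sigma2)

def lognormal_cdf(x: float, mu: float, sigma: float) -> float:
    """P(X <= x) for X ~ LogNormal(mu, sigma)."""
    return 0.5 * (1.0 + erf((log(x) - mu) / (sigma * sqrt(2.0))))

print(f"P(by 2045) ~ {lognormal_cdf(2045 - BASE, mu, sigma):.2f}")  # ~0.97 under these assumptions
```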
Shane Legg says that there is a 95% probability of human-level AI by 2045.
I was surprised; his recent post didn’t leave me with this impression, and I didn’t remember the past well enough. But apparently this is correct; here’s the post and visualization of the prediction endorsed by Legg.
Cool, thanks.
At least one head pastor of a Protestant megachurch in the US will be found to be engaging in homosexual activity....70%
Why do you draw attention to this question, and how did you reach this estimate?
Unless you have knowledge of some change (e.g., newspaper interest), this seems easy to estimate from the frequency of occurrence. This is a question about the steady state of the world, not about the future (modulo the not terribly rapid change in the number of megachurches). If you care enough to mention this, I’d expect you to care enough to have some gut-feeling estimate for it. In particular, 70% odds for the first such scandal to reach those who haven’t heard about earlier scandals (if any) is absurd (again, unless you know about some change).
Compare this to your prediction about RH!
This question was based on a combination of the base rate and the increasing number of megachurches.
In particular, 70% odds for the first such scandal to reach those who haven’t heard about earlier scandals (if any) is absurd (again, unless you know about some change).
But these scandals do occur. A naive baseline rate puts them at slightly over one every two years. Prior example scandals include Ted Haggard (2006) and Eddie Long (2010). I picked a rate slightly higher than expected from the historical base rate, primarily because the number of megachurches has been growing over the last few years. (Note also that I used a stricter definition of megachurch than is often used: 2000 regular congregants is often the dividing line, not 3000.)
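For what it’s worth, here is a minimal sketch of that base rate under a Poisson-arrivals assumption (the model choice is mine; the “one every two years” figure is from the comment above):

```python
# Sketch: annual scandal probability implied by a Poisson arrival model.
from math import exp, log

rate = 0.55  # scandals per year ("slightly over one every two years")
p_at_least_one = 1 - exp(-rate)
print(f"P(at least one scandal in a year) ~ {p_at_least_one:.2f}")  # ~0.42

# Rate that a flat 70% per-year probability would imply under the same model:
implied_rate = -log(1 - 0.70)
print(f"Rate implied by 70% ~ {implied_rate:.2f} per year")  # ~1.20
```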
There’s another reason to expect an increasing rate—it’s harder to keep secrets these days.
Yes.
Also, I strongly suspect that changing attitudes towards homosexuality in the US play a role, although I don’t have a precise understanding of how I expect that to work.
Speaking roughly: my intuition is that when X is bad, a lot of people who have minor suspicions that someone they trust is doing X are motivated to pursue those suspicions, but when X is really really bad, those same people are instead motivated to not think about their suspicions. And I think the ratio of people who think homosexuality is really really bad to those who merely think it’s bad is decreasing.
Not to mention, of course, that illicit relationships by their nature can’t be kept secret from everyone—the other person in the relationship has to know—and the more acceptable the class of relationship becomes in the broader community the easier it is for the other person to reveal it.
I wonder if itemised bills for rentboys etc have turned up on any megachurch’s accounts. That would make for an amusing, if not very surprising, document leak.