Here are some of mine. These are very rough and I could probably be persuaded on many of them to move them significantly in some direction.
By 2030 (and after January 1st, 2020),
No high-level AGI, defined as a single system that can perform nearly every economically valuable task more cheaply than a human, will have been created. 94%
No robot hand will be able to manipulate a Rubik’s cube as well as a top human. 80%
No state will secede from the US. 95%
No language model will, without substantial aid, write a book that ends up on the New York Times bestseller list. 97%
No pandemic will kill >50 million people. 93%
Neither Puerto Rico nor DC will be recognized as a state. 80%
Traditional religion will continue to decline in the West, as measured by surveys that track engagement. 85%
Bryan Caplan will lose a bet. 75%
No US President will utter the words “Existential risk” in public during their term as president. 65%
No human will have set foot on Mars. 50%
At least one company sells nearly fully autonomous cars, defined as cars that can autonomously perform nearly all tasks that normal drivers accomplish. 80%
Robin Hanson will disagree with the statement, “The rate of automation increased substantially during the 2020s, compared to prior decades.” 85%
Experts will recognize that top computers can reliably beat humans at narrow language benchmarks, such as those on https://super.gluebenchmark.com/. 90%
Kurzweil will lose his bet on Longbets (http://longbets.org/1/). 55%
There will be no convincing evidence of contact from extraterrestrials. 99%
Jeff Bezos will be unseated as the richest person in the world. 70%
Robust mouse rejuvenation, defined as a mouse being rejuvenated so that it lives 2500 days, will not have been demonstrated. 85%
If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%
There will be another economic recession in the United States. 70%
World GDP will be higher than it was in 2019. 97%
No one will have won a Nobel Prize in Physics for their work on string theory. 80%
No war larger than the Syrian Civil War by death count, according to a reputable organization, will have occurred. 65%
Donald Trump will not be convicted of high crimes and misdemeanors in his first term as president. 95%
Donald Trump will serve his entire first term as president. 92%
Donald Trump will win re-election. 55%
Roe v Wade will not be overturned. 70%
No proof that P = NP. 98%
No proof that P != NP. 90%
You predict that it is more likely that an AI “that can perform nearly every economically valuable task more cheaply than a human” will have been created than that a language model will, without substantial aid, write a book that ends up on the New York Times bestseller list (6% vs. 3%, reading off the complements).

This seems weird, as the first seems very likely to cause the second.
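To make the tension concrete, here’s a toy consistency check using the stated credences (the 0.9 conditional is an illustrative assumption on my part, not a number from this thread):

```python
# Stated credences from the list above.
p_agi = 1 - 0.94   # P(high-level AGI by 2030) = 0.06
p_book = 1 - 0.97  # P(LM-written NYT bestseller by 2030) = 0.03

# Illustrative assumption: conditional on AGI, such a book appears
# with probability 0.9 (a hypothetical number, not from the comment).
p_book_given_agi = 0.9

# Lower bound from the law of total probability:
# P(book) >= P(book | AGI) * P(AGI)
lower_bound = p_book_given_agi * p_agi  # 0.054
print(f"stated P(book) = {p_book:.3f}, implied lower bound = {lower_bound:.3f}")
# 0.030 < 0.054: the two credences are in tension under that assumption.
```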
A language model making it onto the NYT bestseller list seems like a very specific thing. High-level machine intelligence is not.
The Rubik’s Cube one strikes me as much more feasible than the other AI predictions. Look at the dexterity improvements of Boston Dynamics over the last decade and apply that to the current robotic hands, and I think there’s a better than 70% chance you get a Rubik’s Cube-spinning, chopsticks-using robotic hand by 2030.
To help calibrate, watch this video.
This video didn’t shift my priors much. The impressive thing in the video is speed and precision, which are trivial for machines, let alone AI. Speed and precision are already there; they just need to be hooked up to some qualitative breakthrough in application.
I’m willing to bet on this prediction.
How are you defining “hand”?
Obviously this beats humans for speed, but I guess you’re thinking of something general-purpose, where the Rubik’s cube is just a test of dexterity?
By hand I mean anything that closely resembles a human hand.
My main data point is that I’m not very impressed by OpenAI’s robot hand. It is very impressive relative to what we had 10 years ago, but top humans are extremely adept at manipulating things in their hands.
Regarding “If a survey is performed, most people in the United States will say that curing aging is undesirable. 85%”: one similar survey has already been done. The result depends on whether you specify that an unlimited lifespan would be spent in health rather than in increasing frailty. If you do, >40% of respondents opt for an unlimited lifespan; otherwise, 1%. https://www.frontiersin.org/articles/10.3389/fgene.2015.00353/full
Well, even people working on AGI don’t think that is a possibility. I think the word you are looking for is “superintelligence” not AGI.
I’m using a slightly modified version of the definition given by Grace et al. for high-level machine intelligence.
So, superintelligence. I would suggest editing your prediction to say so; they’re not synonymous terms. In fact, AGI under many architectures is fully expected to be less efficient than humans without extensive training. AGI is a statement of capability: it can, in principle, solve any problem, not that it does so better than humans.
If AGI just means “can, in principle, solve any problem,” then I think we could already build very, very slow AGI right now (at least for all problems with well-defined solutions: you just perform a search over candidate solutions, as in the sketch below).
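A minimal sketch of the search I mean, assuming the problem comes with a decidable verifier (the function and example here are mine, purely for illustration):

```python
from itertools import count, product

def brute_force_solve(verify, alphabet="01"):
    """Enumerate candidate solutions in order of increasing length and
    return the first one the verifier accepts. This halts for any problem
    whose solutions are finite strings checked by a decidable verifier,
    which is exactly the weight carried by "well-defined solutions".
    """
    for length in count(0):
        for chars in product(alphabet, repeat=length):
            candidate = "".join(chars)
            if verify(candidate):
                return candidate

# Hypothetical usage: find a 3-bit string containing exactly two 1s.
print(brute_force_solve(lambda s: len(s) == 3 and s.count("1") == 2))
```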
Plus, I don’t think my definition matches the definition given by Bostrom:

By a “superintelligence” we mean an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.
ETA: I edited the original post to be more specific.
Your prediction reads the same as this definition AFAICT, if you substitute “nearly every” for “practically every”, etc.
I think this is an instance of The Illusory Transparency of Words. What you wrote in the prediction probably doesn’t have the interpretation you meant.
We don’t have AGI now because there is a lot hiding behind “at least for all well-defined solutions.” Therein lies the magic.
There are some unspecified parameters here. Do you mean autonomous cars that are …
region-locked, or able to drive anywhere?
able to drive in all weather conditions, or limited to only some?
I think: able to drive on any road that Google Maps has access to, and able to drive in all “normal” weather conditions (some snow, medium amounts of rain). I’m not confident in this, though, and I imagine it might be a while (>10 years) before autonomous vehicles are truly autonomous (that is, able to drive in any condition a human could, in any context).
“Any road that Google Maps has access to” is a high bar when you consider that it includes roads in many countries with wildly different driver and pedestrian dynamics from the United States.
No language model will, without substantial aid, write a book that ends up on the New York Times bestseller list. 97%

“Essays from the Noosphere: Twelve Artificial Intelligences Reflect on Life, the Universe, and Everything”
This seems to ignore the quite plausible scenario where an AI-written book becomes a Schelling point for folks who use their bookshelves as a signaling mechanism. Being 100% AI and 0% human would be a boon in that scenario, even if the book is a little rough around the edges.
That’s a good point, but it doesn’t reduce my credence much. Perhaps 94% or 95% is more appropriate? I’d be willing to bet on this.
I think you might have an inflated sense of how hard it is to get on the NYT bestseller list. Just go a little bit viral for one week and you’re done. https://www.vox.com/culture/2017/9/13/16257084/bestseller-lists-explained
This seems underconfident?
I have different intuitions for both:

No one will have won a Nobel Prize in Physics for their work on string theory. 80%

and

No US President will utter the words “Existential risk” in public during their term as president. 65%
But these are such that I’d expect looking into either for a couple of hours to change my mind. For the second one, the Google Ngram page for “existential risk” is interesting, but it sadly only reaches up to 2008.