Thanks. I agree that in the usual case, the non-releases should cause updates in one direction and releases in the other. But in this case, everyone expected GPT-4 around February (or at least I did, and I’m a nobody who just follows some people on twitter), and it was released roughly on schedule (especially if you count Bing), so we can just do a simple update on how impressive we think it is compared to expectations.
Other times where I think people ought to have updated towards longer timelines, but didn’t:
Self-driving cars. Around 2015-2016, it was common knowledge that truck drivers would be out of a job within 3-5 years. Most people here likely believed it, even if it sounds really stupid in retrospect (people often forget what they used to believe). I had several discussions with people expecting fully self-driving cars by 2018.
AlphaStar. When AlphaStar first came out, it was claimed to be superhuman at StarCraft. After an issue with its superhuman clicking was fixed, AlphaStar was no longer superhuman at StarCraft, and to this day there’s no bot that is superhuman at StarCraft. Generally, people updated the first time (StarCraft solved!) and never updated back when that turned out to be wrong.
That time when OpenAI tried really hard to train an AI to do formal mathematical reasoning and still failed to solve IMO problems (even when the problems were translated into formal mathematics, and even when the AI was given access to a brute-force algebra solver). Somehow people updated towards shorter timelines, even though to me this looked like negative evidence (it just seemed like a failed attempt).
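For concreteness, the "simple update" described above is just Bayesian conditioning. Here is a toy sketch with made-up numbers (the prior and likelihoods below are purely hypothetical, chosen only to show the mechanics):

```python
# Toy Bayesian update, in the spirit of the "simple update" above.
# All numbers are hypothetical -- chosen only to illustrate the mechanics.

prior = 0.30            # P(short timelines) before GPT-4's release
p_e_short = 0.60        # P(release this impressive | short timelines)
p_e_long = 0.50         # P(release this impressive | longer timelines)

# Bayes' rule: P(H | E) = P(E | H) * P(H) / P(E)
posterior = (p_e_short * prior) / (
    p_e_short * prior + p_e_long * (1 - prior)
)

print(f"posterior P(short timelines) = {posterior:.3f}")  # ~0.340
```

Since the release landed roughly on schedule, the evidence's weight comes almost entirely from the likelihood ratio (how impressive it was relative to expectations), not from the timing itself.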
Self-driving cars. Around 2015-2016, it was common knowledge that truck drivers would be out of a job within 3-5 years. Most people here likely believed it, even if it sounds really stupid in retrospect (people often forget what they used to believe). I had several discussions with people expecting fully self-driving cars by 2018.
This doesn’t match my experience. I can only speak for groups like “researchers in theoretical computer science,” “friends from MIT,” and “people I hang out with at tech companies,” but at least within those groups people were much more conservative. You may have been in different circles, but it clearly wasn’t common knowledge that self-driving cars were coming soon (and certainly this was not the prevailing view of people I talked with who worked on the problem).
In 2016 I gave around a 60% chance of self-driving cars good enough to operate a ride-hailing service in ~10 large US cities by mid 2023 (with enough coverage to work for ~half of commutes within the city). I made a number of bets about this proposition at 50-50 odds between 2016 and 2018.
I generally found a lot of people who were skeptical and pretty few people who were more optimistic than I was. (Though I did make a bet on the other side with someone who assigned >10% chance to a self-driving car ride-hailing service in SF within 2 years.) The point of these bets was mostly to be clear about my views at the time and the views of others, and indeed I feel like the issue is getting distorted somewhat with hindsight and it’s helpful to have the quantitative record.
I had similar experiences earlier; I first remember discussing this issue with theoretical computer science researchers at a conference in 2012, where my outlook of “more likely than not within a few decades” was contrarian.
That definitely sounds like a contrarian viewpoint in 2012, but surely not by 2016-2018.

Look at this from Nostalgebraist:

https://nostalgebraist.tumblr.com/post/710106298866368512/oakfern-replied-to-your-post-its-going-to-be

which includes the following quote:

In 2018 analysts put the market value of Waymo LLC, then a subsidiary of Alphabet Inc., at $175 billion. Its most recent funding round gave the company an estimated valuation of $30 billion, roughly the same as Cruise. Aurora Innovation Inc., a startup co-founded by Chris Urmson, Google’s former autonomous-vehicle chief, has lost more than 85% since last year [i.e. 2021] and is now worth less than $3 billion. This September a leaked memo from Urmson summed up Aurora’s cash-flow struggles and suggested it might have to sell out to a larger company. Many of the industry’s most promising efforts have met the same fate in recent years, including Drive.ai, Voyage, Zoox, and Uber’s self-driving division. “Long term, I think we will have autonomous vehicles that you and I can buy,” says Mike Ramsey, an analyst at market researcher Gartner Inc. “But we’re going to be old.”

It certainly sounds like there was an update by the industry towards longer AI timelines!

Also, I bought a new car in 2018, and I worried at the time about the resale value (because it seemed likely self-driving cars would be on the market in 3-5 years, when I was likely to sell). That was a common worry; I’m not weird, and if anything I feel like I was even on the skeptical side.

Someone on either LessWrong or SSC offered to bet me that self-driving cars would be on the market by 2018 (I don’t remember what the year was at the time -- 2014?)

Every year since 2014, Elon Musk promised self-driving cars within a year or two. (Example source: https://futurism.com/video-elon-musk-promising-self-driving-cars) Elon Musk is a bit of a joke now, but 5 years ago he was highly respected in many circles, including here on LessWrong.