I never once claimed the current trend is not concerning. You’re repeatedly switching topics to this!
It is you and Charlotte who brought up the Turing test, not me. I didn’t even mention it until Charlotte, out of nowhere, told me she passes it (then I merely told her she doesn’t). I’m glad you agree she doesn’t pass it. I was disturbed to hear you, Charlotte, and many people here pretend that the current (stupid) chatbots pass the Turing test. They just don’t, and it’s not close.
Maybe we all die tomorrow! That doesn’t change the fact that Charlotte does not pass the Turing test, nor the fact that she does not say sensible things even when I’m not running a Turing test and merely asking her if she’s sentient.
The goal posts keep moving here.
To complete the test, please do ask this ice cube pendulum question of a few nearby children and let us know if they all answer perfectly. Do not use hand gestures to explain how the pendulum moves.
I mean, a 5-year-old won’t be able to answer it, so it depends what age you mean by a child. But there are a few swinging pendulums in my local science museum; I think you’re underestimating children here, though it’s possible my phrasing is not clear enough.
By the way, I asked the same question of ChatGPT, and it gave the correct answer:
I just tried ChatGPT 10 times. It said “line” 3/10 times. Of those 3 times, 2 said the line would be curved (wrong, though a human might say that as well). The other 7 times were mostly “ellipse” or “irregular shape” (neither of which is among the options), but “circle” appeared as well. Note that if ChatGPT guessed randomly among the options, it would get it right 2.5/10 times.
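For what it’s worth, the chance baseline can be sanity-checked with a quick binomial computation (assuming the question offered four answer options, which is what the 2.5/10 figure implies):

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 10 trials, assuming four answer options (which the 2.5/10 baseline implies)
n, p = 10, 0.25
expected_correct = n * p               # 2.5 right by pure guessing
p_three_or_more = binom_tail(n, 3, p)  # chance of >= 3 right by guessing
print(expected_correct, round(p_three_or_more, 3))  # 2.5 0.474
```

Under random guessing, 3 or more correct out of 10 happens nearly half the time, so 3/10 is statistically indistinguishable from chance.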
It’s perhaps not the best test of geometric reasoning, because it’s difficult for humans to understand the setup. It was only my first thought; I can try to look up what Gary Marcus recommends instead, I guess. In any event, you are wrong if you claim that current LLMs can solve it. I would actually bet that GPT-4 will also fail this. (But again, it’s not the best test of geometric reasoning, so maybe we should bet on a different example of geometric reasoning.) It is very unlikely that the GPT architecture causes anything like a 3d world model to form inside the neural net (after all, GPT never sees images). Therefore, any test of geometric reasoning that humans solve via visualization would be quite tricky for LLMs, borderline impossible.
(Of course, recent LLMs have read the entire internet and have memorized a LOT of facts regarding how 3d objects move, so one needs to be a bit creative in coming up with a question outside its training set.)
Edit: Just tried a 2d reasoning prompt with some simple geometry, and ChatGPT failed it 5/5 times. I think generating such prompts is reasonably easy, but I concede that a 5-year-old cannot solve any of them (5-year-olds really don’t know much...)
I just tried ChatGPT 10 times. It said “line” 3/10 times. Of those 3 times, 2 said the line would be curved (wrong, though a human might say that as well). The other 7 times were mostly “ellipse” or “irregular shape” (neither of which is among the options), but “circle” appeared as well. Note that if ChatGPT guessed randomly among the options, it would get it right 2.5/10 times.
It’s perhaps not the best test of geometric reasoning, because it’s difficult for humans to understand the setup.
Doesn’t prompting it to think step by step help in this case?
Not particularly, no. There are two reasons: (1) RLHF already tries to encourage the model to think step by step, which is why you often get long-winded multi-step answers to even simple arithmetic questions. (2) Thinking step by step only helps for problems that can be solved via easier intermediate steps. For example, solving “2x+5=5x+2” can be achieved via a sequence of intermediate steps; the model generally cannot solve such questions in a single forward pass, but it can do each intermediate step in a single forward pass, so “think step by step” helps it a lot. I don’t think this applies to the ice cube question.
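The decomposition in (2) can be made concrete with a toy solver that performs the same two intermediate steps a model would verbalize (a hypothetical sketch, not how any LLM actually computes):

```python
from fractions import Fraction

def solve_linear(a, b, c, d):
    """Solve a*x + b = c*x + d via the two verbalizable steps."""
    # Step 1: collect x terms and constants: (a - c) * x = d - b
    coeff, rhs = Fraction(a - c), Fraction(d - b)
    # Step 2: divide through by the coefficient
    return rhs / coeff

print(solve_linear(2, 5, 5, 2))  # 1: from 2x + 5 = 5x + 2, i.e. -3x = -3
```

Each step is trivially easy on its own; the chain is what the scratchpad provides. The ice cube question has no such natural chain of easy sub-steps.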
But again, it’s not the best test of geometric reasoning, so maybe we should bet on a different example of geometric reasoning.
If you are willing to generate a list of 4-10 other such questions of similar difficulty, I’m willing to take a bet wherein I get $X for each of those questions GPT-4 gets right with probability > 0.5, and you get $X for each question it gets wrong with probability ≥ 0.5, where X ≤ 30.
(I don’t actually endorse bets where you get money only in worlds where money is worth less in expectation, but I do endorse specific predictions and am willing to pay that here if I’m wrong.)
Of similar difficulty to which question? The ice cube one? I’ll take the bet—that one is pretty hard. I’d rather do it with fake money or reputation, though, since the hassle of real money is not worth so few dollars (e.g. I’m anonymous here).
If you mean the “intersection points between a triangle and a circle” one, I won’t take that bet—I chose that question to be easy, not hard (I had to test a few easy questions to find one that ChatGPT gets consistently wrong). I expect GPT-4 will be able to solve “max number of intersection points between a circle and a triangle”, but I expect it not to be able to solve questions on the level of the ice cube one (though the ice cube one specifically seems like a bit of a bad question, since so many people have contested the intended answer).
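As a quick check of the intended answer, the intersection count can be computed directly; the triangle below is a hypothetical one chosen so that each side is a chord of a unit circle, realizing the maximum of six points (two per side):

```python
import math

def segment_circle_intersections(p, q, center, r):
    """Count the points where segment p-q meets the circle."""
    (px, py), (qx, qy), (cx, cy) = p, q, center
    dx, dy = qx - px, qy - py          # segment direction
    fx, fy = px - cx, py - cy          # start point relative to center
    # Substitute p + t*(q - p) into the circle equation: a*t^2 + b*t + c = 0
    a = dx * dx + dy * dy
    b = 2 * (fx * dx + fy * dy)
    c = fx * fx + fy * fy - r * r
    disc = b * b - 4 * a * c
    if disc < 0:
        return 0
    sq = math.sqrt(disc)
    roots = {(-b - sq) / (2 * a), (-b + sq) / (2 * a)}  # set merges the tangent case
    return sum(1 for t in roots if 0.0 <= t <= 1.0)

# Equilateral triangle (circumradius 1.6) around a unit circle at the
# origin: every side sits 0.8 from the center, so each side is a chord.
s = 1.6 * math.sqrt(3) / 2
tri = [(0.0, 1.6), (-s, -0.8), (s, -0.8)]
circle = ((0.0, 0.0), 1.0)
total = sum(
    segment_circle_intersections(tri[i], tri[(i + 1) % 3], *circle)
    for i in range(3)
)
print(total)  # 6: two crossings per side, the claimed maximum
```

A line meets a circle in at most two points, so three sides give at most six; the configuration above shows the bound is attained.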
In any case, coming up with 4-10 good questions is a bit time consuming, so I’ll have to come back to that.
Either was fine. I didn’t realize you expected GPT-4 would be able to solve the latter, which makes this less interesting to me, but I also intended not to fuss over the details.
I just want to note that ChatGPT-4 cannot solve the ice cube question, like I predicted, but can solve the “intersection points between a triangle and a circle” question, also like I predicted.
I assume GPT-4 did not meet your expectations and you are updating towards longer timelines, given it cannot solve a question you thought it would be able to solve?
I’ll know how I want to judge it better after I have more data points. I have a page of questions I plan to ask at some point.
With regards to this update specifically, recall both that I thought you thought it would fail the intersection points question when I offered the bet, and that I specifically asked for a reduced-variance version of the bet. Those should tell you something about my probabilities going into this.
Fair enough. I look forward to hearing how you judge it after you’ve asked your questions.
I think people on LW (though not necessarily you) have a tendency to be maximally hype/doomer regarding AI capabilities and never to update in the direction of “this was less impressive than I expected, let me adjust my AI timelines to be longer”. Of course, that can’t be rational, due to Conservation of Expected Evidence, which (roughly speaking) says you should be equally likely to update in either direction. Yet I don’t think I’ve ever seen a rationalist say “huh, that was less impressive than I expected, let me update backwards”. I’ve been on the lookout for this for a while now; if you see someone saying this (about any AI advancement or lack thereof), let me know.
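Conservation of Expected Evidence can be illustrated with a toy Bayesian example: the two possible updates may differ greatly in size, but they balance exactly in expectation (a minimal sketch with made-up numbers):

```python
from fractions import Fraction

# Two hypotheses: the coin is double-headed (prior 1/2) or fair.
prior = Fraction(1, 2)
p_h_trick = Fraction(1)      # P(heads | double-headed)
p_h_fair = Fraction(1, 2)    # P(heads | fair)

# Probability of observing heads on one flip
p_heads = prior * p_h_trick + (1 - prior) * p_h_fair  # 3/4

# Bayesian posteriors for each possible observation
post_heads = prior * p_h_trick / p_heads  # 2/3: a small update upward
post_tails = Fraction(0)                  # tails rules out the trick coin entirely

# The updates differ in size, yet cancel exactly in expectation
expected_posterior = p_heads * post_heads + (1 - p_heads) * post_tails
print(expected_posterior == prior)  # True
```

So “equally likely to update in either direction” holds in expectation, not observation by observation, which is relevant to the reply below about releases versus non-releases.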
Ah, well it seems to me that this is mostly people being miscalibrated before GPT-3 hit them over the head about it (and to a lesser extent, even then). You should be roughly equally likely to update in either direction only in expectation over possible observations. Even if you are perfectly calibrated, you should still expect, a priori, to make shortening updates around releases and lengthening updates around non-releases, since both worlds have nonzero probability.
But if you’d appreciate a tale of over-expectations, my modal timeline gradually grew for a good while after this conversation with gwern (https://twitter.com/gwern/status/1319302204814217220), where I was thinking people were being slower about this than I expected and meta-updating towards the gwern position.
Alas, recent activity has convinced me my original model was right; it just had too-small constant factors for ‘how much longer does stuff take in reality than it feels like it should take?’ Most of my timeline-shortening updates since GPT-3 have been like this: “whelp, I guess my modal models weren’t wrong, there goes the tail probability I was hoping for.”
Another story would be my update toward alignment conservatism, mostly by updating on the importance of a few fundamental model properties, combined with some empirical evidence being non-pessimal. Pretraining has the powerful property that the model doesn’t have influence over its reward, which avoids a bunch of reward hacking incentives, and I didn’t update on that properly until I thought it through, though idk of anyone doing anything clever with the insight yet. Alas this is big on a log scale but small on an absolute one.
Thanks. I agree that in the usual case, the non-releases should cause updates in one direction and releases in the other. But in this case, everyone expected GPT-4 around February (or at least I did, and I’m a nobody who just follows some people on twitter), and it was released roughly on schedule (especially if you count Bing), so we can just do a simple update on how impressive we think it is compared to expectations.
Other times where I think people ought to have updated towards longer timelines, but didn’t:
Self-driving cars. Around 2015-2016, it was common knowledge that truck drivers would be out of a job within 3-5 years. Most people here likely believed it, even if it sounds really stupid in retrospect (people often forget what they used to believe). I had several discussions with people expecting fully self-driving cars by 2018.
AlphaStar. When AlphaStar first came out, it was claimed to be superhuman at StarCraft. After an issue with its superhumanly fast clicking was fixed, AlphaStar was no longer superhuman at StarCraft, and to this day there’s no bot that is superhuman at StarCraft. Generally, people updated the first time (StarCraft solved!) and never updated back when it turned out to be wrong.
That time when OpenAI tried really hard to train an AI to do formal mathematical reasoning and still failed to solve IMO problems (even when translated to formal mathematics and even when the AI was given access to a brute force algebra solver). Somehow people updated towards shorter timelines even though to me this looked like negative evidence (it just seemed like a failed attempt).
Self-driving cars. Around 2015-2016, it was common knowledge that truck drivers would be out of a job within 3-5 years. Most people here likely believed it, even if it sounds really stupid in retrospect (people often forget what they used to believe). I had several discussions with people expecting fully self-driving cars by 2018.
This doesn’t match my experience. I can only speak for groups like “researchers in theoretical computer science,” “friends from MIT,” and “people I hang out with at tech companies,” but at least within those groups people were much more conservative. You may have been in different circles, but it clearly wasn’t common knowledge that self-driving cars were coming soon (and certainly this was not the prevailing view of people I talked with who worked on the problem).
In 2016 I gave around a 60% chance of self-driving cars good enough to operate a ride-hailing service in ~10 large US cities by mid 2023 (with enough coverage to work for ~half of commutes within the city). I made a number of bets about this proposition at 50-50 odds between 2016 and 2018.
I generally found a lot of people who were skeptical and pretty few people who were more optimistic than I was. (Though I did make a bet on the other side with someone who assigned >10% chance to a self-driving car ride-hailing service in SF within 2 years.) The point of these bets was mostly to be clear about my views at the time and the views of others, and indeed I feel like the issue is getting distorted somewhat with hindsight and it’s helpful to have the quantitative record.
I had similar experiences earlier; I first remember discussing this issue with theoretical computer science researchers at a conference in 2012, where my outlook of “more likely than not within a few decades” was contrarian.
That definitely sounds like a contrarian viewpoint in 2012, but surely not by 2016-2018.
Look at this from Nostalgebraist:
https://nostalgebraist.tumblr.com/post/710106298866368512/oakfern-replied-to-your-post-its-going-to-be
which includes the following quote:
In 2018 analysts put the market value of Waymo LLC, then a subsidiary of Alphabet Inc., at $175 billion. Its most recent funding round gave the company an estimated valuation of $30 billion, roughly the same as Cruise. Aurora Innovation Inc., a startup co-founded by Chris Urmson, Google’s former autonomous-vehicle chief, has lost more than 85% since last year [i.e. 2021] and is now worth less than $3 billion. This September a leaked memo from Urmson summed up Aurora’s cash-flow struggles and suggested it might have to sell out to a larger company. Many of the industry’s most promising efforts have met the same fate in recent years, including Drive.ai, Voyage, Zoox, and Uber’s self-driving division. “Long term, I think we will have autonomous vehicles that you and I can buy,” says Mike Ramsey, an analyst at market researcher Gartner Inc. “But we’re going to be old.”
It certainly sounds like there was an update by the industry towards longer AI timelines!
Also, I bought a new car in 2018, and I worried at the time about the resale value (because it seemed likely self-driving cars would be on the market in 3-5 years, when I was likely to sell). That was a common worry; I’m not weird. I feel like I was even on the skeptical side, if anything.
Someone on either LessWrong or SSC offered to bet me that self-driving cars would be on the market by 2018 (I don’t remember what the year was at the time -- 2014?)
Every year since 2014, Elon Musk promised self-driving cars within a year or two. (Example source: https://futurism.com/video-elon-musk-promising-self-driving-cars) Elon Musk is a bit of a joke now, but 5 years ago he was highly respected in many circles, including here on LessWrong.
‘how much longer does stuff take in reality than it feels like it should take?’
This is the best argument against a lot of the fast takeoff stories that I’ve seen; underestimating how much time things take in reality, as opposed to in one’s head, is probably one of the big failure modes of intellectuals.
Note that there are several phases of takeoff. We have the current ramp-up of human effort into AI, which is accelerating results. We have AI potentially self-improving, which is already in use in GPT-4 (see the RBRM rubrics, where the model grades its own outputs and the grades are used for RL training).
And then we have a “pause”, where the models have self-improved to the limits of either data, compute, or robotics capacity. I expect this to happen before 2030.
But the pause is misleading. If every year the existing robotics fleet is used to add just 10 percent more to itself, or to add just 10 percent more high-quality scientific data or human-interaction data to the existing corpus, or to build 10 percent more compute, this is a hard exponential process.
It will not slow down until the solar system is consumed. (The limit from there on being, obviously, the speed of light.)
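For what the compounding claim is worth numerically, 10 percent yearly self-replication doubles capacity roughly every seven years (a toy model of the argument, not a forecast):

```python
import math

def capacity(years, rate=0.10, start=1.0):
    """Capacity after compounding `rate` growth for `years` years."""
    return start * (1 + rate) ** years

# Doubling time of a 10%/year process: log(2) / log(1.1)
doubling_time = math.log(2) / math.log(1.10)
print(round(capacity(30), 1), round(doubling_time, 1))  # 17.4 7.3
```

So even the "slow" 10 percent scenario yields roughly 17x capacity in 30 years; the disagreement upthread is really about whether the 10 percent replication rate is achievable, not about the arithmetic.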