The post starts with the realization that we are actually bottlenecked by data and then proceeds to talk about HW acceleration. Deep learning is in a sense a general paradigm, but so is random search. It is actually quite important to have the necessary scale of both compute and data, and right now we are not sure about either of them. Not to mention that it is still not clear whether DL actually leads to anything truly intelligent in a practical sense, or whether we will simply end up with very good token predictors of very limited use.
I don’t actually think we’re bottlenecked by data. Chinchilla represents a change in focus (for current architectures), but I think it’s useful to remember what that paper actually told the rest of the field: “hey you can get way better results for way less compute if you do it this way.”
I feel like characterizing Chinchilla most directly as a bottleneck would be missing its point. It was a major capability gain, and it tells everyone else how to get even more capability gain. There are some data-related challenges far enough down the implied path, but we have no reason to believe that they are insurmountable. In fact, it looks an awful lot like it won’t even be very difficult!
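For a sense of what “do it this way” cashes out to numerically, here is a rough back-of-the-envelope sketch (mine, not from the thread), assuming the common C ≈ 6·N·D training-FLOPs approximation and the roughly-20-tokens-per-parameter rule of thumb people take away from Chinchilla:

```python
# Rough Chinchilla-style sizing. Assumptions (rules of thumb, not the paper's
# exact fitted values): training FLOPs C ~ 6 * N * D, and compute-optimal
# training wants roughly D ~ 20 * N tokens per parameter.

def compute_optimal_split(flops_budget: float, tokens_per_param: float = 20.0):
    """Return an approximate (params, tokens) pair for a given FLOPs budget."""
    # C = 6 * N * D with D = tokens_per_param * N
    # => N = sqrt(C / (6 * tokens_per_param)), D = tokens_per_param * N
    n_params = (flops_budget / (6.0 * tokens_per_param)) ** 0.5
    return n_params, tokens_per_param * n_params

for budget in (1e21, 1e23, 1e25):
    n, d = compute_optimal_split(budget)
    print(f"C = {budget:.0e} FLOPs -> ~{n:.1e} params, ~{d:.1e} tokens")
```

Under this rule the token requirement only grows with the square root of the compute budget, which is why pushing compute up another few orders of magnitude starts to bump into the size of the available text corpora in the first place.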
With regards to whether deep learning goes anywhere: in order for this to occupy any significant probability mass, I need to hear an argument for how our current dumb architectures do as much as they do, and why that does not imply near-term weirdness. Like, “large transformers are performing {this type of computation} and using {this kind of information}, which we can show has {these bounds} which happens to include all the tasks it has been tested on, but which will not include more worrisome capabilities because {something something something}.”
The space in which that explanation could exist seems small to me. It makes an extremely strong, specific claim that just so happens to be about exactly where the state of the art in AI is.
Could you explain why you feel that way about Chinchilla? Because I found that post (https://www.lesswrong.com/posts/6Fpvch8RR29qLEWNH/chinchilla-s-wild-implications) to give very compelling reasons why data should be considered a bottleneck, and I’m curious what makes you say that it shouldn’t be a problem at all.
Some of my confidence here arises from things that I don’t think would be wise to blab about in public, so my arguments might not be quite as convincing-sounding as I’d like, but I’ll give it a try.
I wouldn’t quite say it’s not a problem at all, but rather it’s the type of problem that the field is really good at solving. They don’t have to solve ethics or something. They just need to do some clever engineering with the backing of infinite money.
I’d put it at a similar tier of difficulty as scaling up transformers to begin with. That wasn’t nothing! And the industry blew straight through it.
To give some examples that I’m comfortable having in public:
Suppose you stick to text-only training. Could you expand your training sets automatically? Maybe create a higher-quality transcription AI and use it to pad your training set with the entirety of YouTube?
Maybe you figure out a relatively simple way to extract more juice from a smaller dataset that doesn’t collapse into pathological overfitting.
Maybe you make existing datasets more informative by filtering out sequences that seem to interfere with training (a toy sketch of this idea follows this list).
Maybe you embrace multimodal training where text-only bottlenecks are irrelevant.
Maybe you do it the hard way. What’s a few billion dollars?
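To make the filtering idea above slightly more concrete, here is a toy sketch; `quality_score` is a hypothetical stand-in for whatever heuristic or trained classifier you would actually use, not a real library call:

```python
from typing import Callable, Iterable

def filter_corpus(docs: Iterable[str],
                  quality_score: Callable[[str], float],
                  threshold: float) -> list[str]:
    """Keep only documents whose quality score clears the threshold."""
    return [doc for doc in docs if quality_score(doc) >= threshold]

# Toy scorer: penalize documents that are mostly repeated tokens.
def toy_score(doc: str) -> float:
    words = doc.split()
    return len(set(words)) / max(len(words), 1)

corpus = ["the cat sat on the mat", "buy buy buy buy buy buy"]
print(filter_corpus(corpus, toy_score, threshold=0.6))  # keeps only the first
```

The real versions of this are obviously much more involved (trained quality classifiers, deduplication, perplexity filters), but the shape of the intervention is the same.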
Another recent example: https://openreview.net/forum?id=NiEtU7blzN
(I guess this technically covers my “by the end of this year we’ll see at least one large model making progress on Chinchilla” prediction, though apparently it was up even before my prediction!)
What about: State-of-the-art models with 500+B parameters still can’t do 2-digit addition with 100% reliability. For me, this shows that the models are perhaps learning some associative rules from the data, but there is no sign of intelligence. An intelligent agent should notice how addition works after learning from TBs of data. Associative memory can still be useful, but it’s not really an AGI.
As mentioned in the post, that line of argument makes me more alarmed, not less.
We observe these AIs exhibiting soft skills that many people in 2015 would have said were decades away, or maybe even impossible for AI entirely.
We can use these AIs to solve difficult reasoning problems that most humans would do poorly on.
And whatever algorithms this AI is using to go about its reasoning, they’re apparently so simple that the AI can execute them while still struggling on absolutely trivial arithmetic.
WHAT?
Yes, the AI has some blatant holes in its capability. But what we’re seeing is a screaming-hair-on-fire warning that the problems we thought were hard are not hard.
What happens when we just slightly improve our AI architectures to be less dumb?
When will we get robotics results that are not laughable? When “Google put their most advanced AI into a robot brain!!!” (reported on for the third time this year), we got a robot that can deliver a sponge and misplace an empty Coke can, but not actually clean anything or do anything useful. It’s hard for me to be afraid of a robot that can’t even plug in its own power cable.
When we get results that it is easy for you to be afraid of, it will be firmly too late for safety work.
I believe that over time we will understand that producing human-like text is not a sign of intelligence. In the past, people believed that only intelligent agents are able to solve math equations (naturally, since only people can do it and animals cannot). Then came computers, and they were able to do all kinds of calculations much faster and without errors. However, from our current point of view we now understand that doing math calculations is not really that intelligent, and even really simple machines can do it. Chess playing is a similar story: we thought that you have to be intelligent, but we found a heuristic that does it really well. People were afraid that chess-algorithm-like machines could be programmed to conquer the world, but from our perspective that’s a ridiculous proposition.
I believe that text generation will be a similar case. We think that you have to be really intelligent to produce human-like outputs, but in the end, with enough data, you can produce something that looks nice and can even be useful sometimes, yet there is no intelligence in there. We will slowly develop an intuition about the capabilities of large-scale ML models. I believe that in the future we will think of them as basically a kind of fuzzy database that we can query with natural language. I don’t think we will think of them as intelligent agents capable of autonomous actions.
You keep distinguishing “intelligence” from “heuristics”, but no one to my knowledge has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are exactly what you’d expect from evolution after all.
So your argument then reduces to a god of the gaps, where we keep discovering some heuristics for an ability that we previously ascribed to intelligence, and the set of capabilities left to “real intelligence” keeps shrinking. Will we eventually be left with the null set, and conclude that humans are not intelligent either? What’s your actual criterion for intelligence that would prevent this outcome?
I believe that fixating on benchmarks such as chess etc. is ignoring the G part of AGI. A truly intelligent agent should be general, at least in the environment it resides in, considering the limitations of its form. E.g. if a robot is physically able to work with everyday objects, we might apply the Wozniak test and expect that an intelligent robot is able to cook a dinner in an arbitrary house, or do any other task that its form permits.
If we assume that right now we are developing purely textual intelligence (without agency, a persistent sense of self, etc.), we might still expect this intelligence to be general, i.e. able to solve an arbitrary task if that seems reasonable considering its form. In this context, for me, an intelligent agent is able to understand common language and act accordingly, e.g. if a question is posed, it can provide a truthful answer.
BIG-bench has recently shown us that our current LMs are able to solve some problems, but they are nowhere near general intelligence. They are not able to solve even very simple problems if doing so actually requires some sort of logical thinking rather than just associative memory. E.g. this is a nice case:
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/symbol_interpretation
You can see in the Model performance plots section that scaling did not help at all with tasks like these. This is a very simple task, but it was not seen in the training data, so the model struggles to solve it and produces random results. If LMs start to solve general linguistic problems, then we will actually have intelligent agents on our hands.
Humans regularly fail at such tasks but I suspect you would still consider humans generally intelligent.
In any case, it seems very plausible that whatever decision procedure is behind more general forms of inference, it will very likely fall to the inexorable march of progress we’ve seen thus far.
If it does, the effectiveness of our compute will potentially increase exponentially almost overnight, since you are basically arguing that our current compute is hobbled by an effectively “weak” associative architecture, but that a very powerful architecture is potentially only one trick away.
The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.
I don’t see any indication of AGI, so it does not really worry me at all. The recent scaling research shows that we need a non-trivial number of orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on some benchmark might still not produce intelligence). On the other hand, we are all out of data (especially high-quality data with some information value, not random product reviews or NSFW subreddit discussions), and our compute options are also not looking that great (Moore’s law is dead; the fact that we are now relying on HW accelerators is not a good thing, it’s proof that CPU performance scaling is, after 70 years, no longer a viable option. There are also some physical limitations that we might not be able to break anytime soon.)
Nobody saw any indication of the atomic bomb before it was created. In hindsight would it have been rational to worry?
Your claims about the compute and data needed and the alleged limits remind me of the fact that Heisenberg actually thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.
There is no indication for many catastrophic scenarios and truthfully I don’t worry about any of them.
What does “no indication” mean in this context? Can you translate that into probability speak?
No indication in this context means that:
Our current paradigm is almost depleted. We are hitting the wall with both data (PaLM uses 780B tokens; there are 3T tokens publicly available, and additional trillions can be found in closed systems, but that’s it) and compute (we will soon hit Landauer’s limit, so no more exponentially cheaper computation; current technology is only three orders of magnitude above this limit, whose room-temperature value is spelled out after this list).
What we currently have is very similar to what we will ultimately be able to achieve with the current paradigm. And it is nowhere near AGI. We need to solve either the data problem or the compute problem.
There is no practical possibility of solving the data problem ⇒ We need a new AI paradigm that does not depend on existing big data.
I assume that we are using existing resources nearly optimally and that no significantly more powerful AI paradigm will be created until we have significantly more powerful computers. To have significantly more powerful computers, we need to sidestep Landauer’s limit, e.g. by using reversible computing or some other completely different hardware architecture.
There is no indication that such an architecture is currently in development and ready to use. It will probably take decades for such an architecture to materialize, and it is not even clear whether we are able to build such a computer with our current technologies.
We will need several technological revolutions before we are able to increase our compute significantly. This will hamper the development of AI, perhaps indefinitely. We might need significant advances in materials science, quantum science, etc. to be theoretically able to build computers that are significantly better than what we have today. Then we will need to develop the AI algorithms to run on them and hope that this is finally enough to reach AGI-levels of compute. Even then, it might take additional decades to actually develop the algorithms.
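For reference on the Landauer point above: the bound itself is just $k_B T \ln 2$ of energy per bit erased, which at room temperature works out to (standard formula, my arithmetic, not a number from the thread):

$$E_{\min} = k_B T \ln 2 \approx (1.38 \times 10^{-23}\,\mathrm{J/K}) \times (300\,\mathrm{K}) \times 0.693 \approx 2.9 \times 10^{-21}\,\mathrm{J}.$$

How far above that bound today’s switching energies actually sit is the commenter’s claim, not something this note checks.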
I don’t think any of the claims you just listed are actually true. I guess we’ll see.
My 8yo is not able to cook dinner in an arbitrary house. Does she have general intelligence?
It is goalpost moving. Basically, it says “current models are not really intelligent”. I don’t think there is much disagreement here. And it’s hard to make any predictions based on that.
Also, “Producing human-like text” is not well defined here; even ELIZA may match this definition. Even the current SOTA may not match it because the adversarial Turing Test has not yet been passed.
It’s not goalpost moving, it’s the hype that’s moving. People reduce intelligence to arbitrary skills or problems that are currently being solved, and then they are let down when they find out that the skill was actually not a good proxy.
I agree that LMs are conceptually more similar to ELIZA than to AGI.
The observation that things that people used to consider intelligent are now considered easy is critical.
The space of stuff remaining that we call intelligent, but AIs cannot yet do, is shrinking. Every time AI eats something, we realize it wasn’t even that complicated.
The reasonable lesson appears to be: we should stop default-thinking things are hard, and we should start thinking that even stupid approaches might be able to do too much.
It’s a statement more about the problem being solved than about the problem solver.
When you stack this on a familiarity with the techniques in use and how they can be transformatively improved with little effort, that’s when you start sweating.
I mean, to me all this indicates is that our conception of “difficult reasoning problems” is wrong and incorrectly linked to our conception of “intelligence”. Like, it shouldn’t be surprising that the LM can solve problems in text which are notoriously based around applying a short step-by-step algorithm, when it has many examples in the training set.
To me, this says that “just slightly improving our AI architectures to be less dumb” is incredibly hard, because models that we would previously have expected to handle trivial arithmetic, given that they can do other “harder” problems, are unable to do so.
I’m not clear on why it wouldn’t be surprising. The MATH dataset is not easy stuff for most humans. Yes, it’s clear that the algorithm used in the cases where the language model succeeds must fit in constant time and so must be (in a computational sense) simple, but it’s still outperforming a good chunk of humans. I can’t ignore how odd that is. Perhaps human reasoning is uniquely limited in tasks similar to the MATH dataset, AI consuming it isn’t that interesting, and there are no implications for other types of human reasoning, but that’s a high-complexity pill to swallow. I’d need to see some evidence to favor a hypothesis like that.
It was easily predictable beforehand that a transformer wouldn’t do well at arithmetic (and all non-constant time algorithms), since transformers provably can’t express it in one shot. Every bit of capability they have above what you’d expect from ‘provably incapable of arithmetic’ is what’s worth at least a little bit of a brow-raise.
Moving to non-constant time architectures provably lifts a fundamental constraint, and is empirically shown to increase capability. (Chain of thought prompting does not entirely remove the limiter on the per-iteration expressible algorithms, but makes it more likely that each step is expressible. It’s a half-step toward a more general architecture, and it works.)
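As a toy illustration of the constant-time point (mine, not from the thread): grade-school addition has a carry that propagates across the digits, i.e. a sequential loop whose length grows with the number of digits, which is exactly the kind of thing a fixed-depth forward pass cannot run for arbitrary lengths; chain of thought amounts to letting the model emit those intermediate states as tokens instead of doing all of them inside one pass.

```python
def add_digit_by_digit(a: str, b: str):
    """Add two decimal strings, recording each intermediate carry step."""
    width = max(len(a), len(b))
    a, b = a.zfill(width), b.zfill(width)
    carry, digits, trace = 0, [], []
    for da, db in zip(reversed(a), reversed(b)):
        total = int(da) + int(db) + carry
        trace.append(f"{da} + {db} + carry {carry} = {total}")
        carry, digit = divmod(total, 10)
        digits.append(str(digit))
    if carry:
        digits.append(str(carry))
    return "".join(reversed(digits)), trace

result, steps = add_digit_by_digit("987", "654")
print(result)    # 1641
for s in steps:  # the number of sequential steps grows with the input length,
    print(s)     # unlike the fixed number of layers in a single forward pass
```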
It really isn’t hard. No new paradigms are required. The proof of concepts are already implemented and work. It’s more of a question of when one of the big companies decides it’s worth poking with scale.
I don’t think it’s odd at all: even a terrible chess bot can outplay almost all humans, because most humans haven’t studied chess. MATH is a dataset of problems from high school competitions, which are well known to require a very limited set of math knowledge and to be solvable by applying simple algorithms.
I know chain-of-thought prompting well; it’s not a way to lift a fundamental constraint, it’s just a more efficient way of targeting the weights which represent what you want in the model.
You don’t provide any proof of this, just speculation, much of it based on massive oversimplifications (if I have time I’ll write up a full rebuttal). For example, RWKV is more of a nice idea that is better on some benchmarks and worse on others than some kind of new architecture that unlocks greater overall capabilities.
I think you may underestimate the difficulty of the MATH dataset. It’s not IMO-level, obviously, but from the original paper:
We also evaluated humans on MATH, and found that a computer science PhD student who does not especially like mathematics attained approximately 40% on MATH, while a three-time IMO gold medalist attained 90%, indicating that MATH can be challenging for humans as well.
Clearly this is not a rigorous evaluation of human ability, but the dataset is far from trivial. Even if it’s not winning IMO golds yet, this level of capability is not something I would have expected to see managed by an AI that provably cannot multiply in one step (if you had asked me in 2015).
{Edit: to further support that this level of performance on MATH was not obvious, this comes from the original paper:
assuming a log-linear scaling trend, models would need around 10^35 parameters to achieve 40% accuracy on MATH, which is impractical.
Further, I’d again point to the Hypermind prediction market for a very glaring case of people thinking 50% on MATH was going to take more time than it actually did. I have a hard time accepting that this level of performance was actually expected without the benefit of hindsight.}
It was not targeted at time complexity, but it unavoidably involves it and provides some evidence for its contribution.
I disagree that I’ve offered no evidence: the arguments from complexity are solid, there is empirical research confirming the effect, and CoT points in a compelling direction.
I can understand if you find this part of the argument a bit less compelling. I’m deliberately avoiding details until I’m more confident that it’s safe to talk about. (To be clear, I don’t actually think I’ve got the Secret Keys to Dooming Humanity or something; I’m just trying to be sufficiently paranoid.)
I would recommend making concrete predictions on the 1-10 year timescale about performance on these datasets (and on more difficult datasets).
They are simulators (https://www.lesswrong.com/posts/vJFdjigzmcXMhNTsx/simulators), not question answerers. Also, I am sure Minerva does pretty well on this task, probably not with 100% reliability, but humans are also not 100% reliable if they are required to answer immediately. If you want the ML model to simulate thinking [better], make it solve this task 1000 times and select the most popular answer (which is already a quite popular approach for some models). I think PaLM would be effectively 100% reliable.
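A minimal sketch of the “solve it 1000 times and take the most popular answer” idea; `sample_answer` is a placeholder for however you would actually call a model at nonzero temperature, not a real API:

```python
import random
from collections import Counter
from typing import Callable

def majority_vote(sample_answer: Callable[[str], str],
                  question: str,
                  n_samples: int = 1000) -> str:
    """Sample the model n times and return the most common answer."""
    answers = [sample_answer(question) for _ in range(n_samples)]
    return Counter(answers).most_common(1)[0][0]

# Toy stand-in: a "model" that answers 37 + 45 correctly only 70% of the time
# still becomes effectively reliable once you take a majority over many samples.
def flaky_adder(question: str) -> str:
    return "82" if random.random() < 0.7 else str(random.randint(0, 99))

print(majority_vote(flaky_adder, "What is 37 + 45?"))  # almost always "82"
```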