Chess playing is a similar story: we thought you had to be intelligent to play it well, but then we found heuristics that do it really well.
You keep distinguishing “intelligence” from “heuristics”, but no one to my knowledge has demonstrated that human intelligence is not itself some set of heuristics. Heuristics are exactly what you’d expect from evolution after all.
So your argument then reduces to a god of the gaps, where we keep discovering some heuristics for an ability that we previously ascribed to intelligence, and the set of capabilities left to “real intelligence” keeps shrinking. Will we eventually be left with the null set, and conclude that humans are not intelligent either? What’s your actual criterion for intelligence that would prevent this outcome?
I believe that fixating on benchmarks such as chess is ignoring the G part of AGI. A truly intelligent agent should be general, at least within the environment it resides in and considering the limitations of its form. E.g. if a robot is physically able to work with everyday objects, we might apply the Wozniak test and expect an intelligent robot to be able to cook dinner in an arbitrary house, or to do any other task that its form permits.
If we assume that what we are developing right now is a purely textual intelligence (without agency, a persistent sense of self, etc.), we might still expect this intelligence to be general, i.e. able to solve an arbitrary task if that is reasonable given its form. In this context, for me an intelligent agent is one that can understand common language and act accordingly, e.g. if a question is posed it can provide a truthful answer.
BIG-bench has recently shown us that our current LMs are able to solve some problems, but they are nowhere near general intelligence. They cannot solve even very simple problems when doing so actually requires some sort of logical thinking rather than just associative memory; this is a nice case:
https://github.com/google/BIG-bench/tree/main/bigbench/benchmark_tasks/symbol_interpretation
You can see in the Model performance plots section that scaling did not help at all with tasks like this. It is a very simple task, but because nothing like it was seen in the training data the model struggles and produces essentially random results. If LMs start to solve general linguistic problems like this, then we will actually have intelligent agents on our hands.
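To make the contrast between associative recall and rule-following concrete, here is a toy sketch in Python (my own construction, not the actual BIG-bench symbol_interpretation format): the symbol-to-meaning mapping is invented on the spot, so there is nothing in any training corpus to recall; the answer can only be derived by applying the freshly stated rules.

```python
# Toy illustration (not the real BIG-bench task format): an invented mini-language
# whose meanings are defined only here, so a purely associative system has nothing
# relevant to recall; the answer must be derived from the stated rules.

RULES = {
    "zib": "red",
    "wug": "blue",
    "dax": "circle",
    "blick": "square",
}

def interpret(phrase: str) -> str:
    """Translate a two-word phrase in the invented language, e.g. 'zib dax'."""
    color_word, shape_word = phrase.split()
    return f"a {RULES[color_word]} {RULES[shape_word]}"

# These mappings were never "seen in training data"; answering correctly is
# trivial rule composition, which is exactly what such tasks probe.
assert interpret("zib dax") == "a red circle"
assert interpret("wug blick") == "a blue square"
```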
Humans regularly fail at such tasks (truthfully answering arbitrary questions posed to them), but I suspect you would still consider humans generally intelligent.
In any case, it seems very plausible that whatever decision procedure is behind more general forms of inference will also fall to the inexorable march of progress we’ve seen thus far.
If it does, the effectiveness of our compute will potentially increase exponentially almost overnight, since you are basically arguing that our current compute is hobbled by an effectively “weak” associative architecture, but that a very powerful architecture is potentially only one trick away.
The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.
I don’t see any indication of AGI, so it does not really worry me at all. Recent scaling research shows that we need orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on some benchmark might still not produce intelligence). On the other hand, we are all out of data (especially high-quality data with real information value, not random product reviews or NSFW subreddit discussions), and our compute options are not looking great either: Moore’s law is dead, and the fact that we now rely on hardware accelerators is not a good thing; it is evidence that after 70 years, CPU performance scaling is no longer a viable option. There are also physical limitations that we may not be able to break anytime soon.
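As a rough back-of-the-envelope for the data side of this claim, here is a sketch of my own (the thread does not cite it) using the Chinchilla rules of thumb of roughly 20 training tokens per parameter and C ≈ 6·N·D training FLOPs:

```python
# Rough scaling arithmetic, assuming the Chinchilla heuristics
# (training compute C ≈ 6 * N * D FLOPs, compute-optimal data ≈ 20 tokens/parameter).

def training_flops(params: float, tokens: float) -> float:
    """Approximate training compute: C ≈ 6 * N * D."""
    return 6.0 * params * tokens

def chinchilla_optimal_tokens(params: float) -> float:
    """Compute-optimal token budget: roughly 20 tokens per parameter."""
    return 20.0 * params

palm_params = 540e9    # PaLM: 540B parameters
palm_tokens = 780e9    # PaLM: trained on 780B tokens
public_tokens = 3e12   # the comment's estimate of publicly available text

opt_tokens = chinchilla_optimal_tokens(palm_params)
print(f"PaLM training compute     ~ {training_flops(palm_params, palm_tokens):.1e} FLOPs")
print(f"Chinchilla-optimal tokens ~ {opt_tokens:.1e} "
      f"({opt_tokens / public_tokens:.1f}x the ~3T public tokens)")
```

Under these assumptions, training even a PaLM-sized model compute-optimally would already want several times the comment’s estimate of all publicly available text, which is the shape of the data wall being described.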
Nobody saw any indication of the atomic bomb before it was created. In hindsight would it have been rational to worry?
Your claims about the compute and data needed, and the alleged limits, remind me of the fact that Heisenberg thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.
There is no indication of many catastrophic scenarios, and truthfully I don’t worry about any of them.
What does “no indication” mean in this context? Can you translate that into probability speak?
No indication in this context means that:
Our current paradigm is almost depleted. We are hitting a wall with both data (PaLM already uses 780B tokens; there are roughly 3T tokens publicly available, and additional trillions can be found in closed systems, but that’s it) and compute (we will soon hit Landauer’s limit, so no more exponentially cheaper computation; current technology is only about three orders of magnitude above this limit, as the back-of-the-envelope check below this list illustrates).
What we currently have is very close to what we will ultimately be able to achieve with the current paradigm, and it is nowhere near AGI. We need to solve either the data problem or the compute problem.
There is no practical possibility of solving the data problem ⇒ We need a new AI paradigm that does not depend on existing big data.
I assume that we are using existing resources nearly optimally, and that no significantly more powerful AI paradigm will be created until we have significantly more powerful computers. To have significantly more powerful computers, we need to sidestep Landauer’s limit, e.g. by using reversible computing or some other completely different hardware architecture.
There is no indication that such an architecture is currently in development and ready to use. It will probably take decades for such an architecture to materialize, and it is not even clear whether we can build such a computer with our current technologies.
We will need several technological revolutions before we are able to increase our compute significantly. This will hamper the development of AI, perhaps indefinitely. We might need significant advances in materials science, quantum science, etc. even to be theoretically able to build computers significantly better than what we have today. Then we will need to develop the AI algorithms to run on them and hope that this is finally enough to reach AGI levels of compute. Even then, it might take additional decades to actually develop the algorithms.
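Here is the back-of-the-envelope check of the Landauer figure referenced above (my own sketch; the ~1e-18 J switching energy is an assumed illustrative number, not something stated in the thread):

```python
import math

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) joules.
k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

landauer_limit = k_B * T * math.log(2)   # ~2.9e-21 J per bit erased

# Assumed illustrative figure, not from the thread: a modern CMOS logic operation
# dissipates very roughly on the order of 1e-18 J; the real number varies widely
# by process node and circuit.
assumed_switch_energy_j = 1e-18

ratio = assumed_switch_energy_j / landauer_limit
print(f"Landauer limit at 300 K ≈ {landauer_limit:.2e} J/bit")
print(f"Assumed switching energy is ≈ {ratio:.0f}x the limit "
      f"(~{math.log10(ratio):.1f} orders of magnitude)")
```

Under that assumption the ratio comes out at a few hundred, i.e. roughly the “three orders of magnitude” of remaining headroom the comment is pointing at.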
I don’t think any of the claims you just listed are actually true. I guess we’ll see.
My 8yo is not able to cook dinner in an arbitrary house. Does she have general intelligence?