In this context, for me, an intelligent agent is one that can understand natural language and act accordingly: e.g. if a question is posed, it can provide a truthful answer.
Humans regularly fail at such tasks, but I suspect you would still consider humans generally intelligent.
In any case, it seems very plausible that whatever decision procedure lies behind more general forms of inference will fall to the inexorable march of progress we’ve seen thus far.
If it does, the effectiveness of our compute could increase exponentially almost overnight: you are essentially arguing that our current compute is hobbled by an effectively “weak” associative architecture, and that a far more powerful architecture is potentially only one trick away.
The real possibility that we are only one trick away from a potentially terrifying AGI should worry you more.
I don’t see any indication of AGI, so it does not really worry me at all. Recent scaling research shows that we would need a non-trivial number of orders of magnitude more data and compute to match human-level performance on some benchmarks (with the huge caveat that matching performance on a benchmark might still not produce intelligence). On the other hand, we are nearly out of data, especially high-quality data with real information value rather than random product reviews or NSFW subreddit discussions. Our compute options are also not looking great: Moore’s law is dead, and the fact that we now rely on HW accelerators is not a good thing; it is evidence that, after 70 years, CPU performance scaling is no longer a viable option. There are also physical limits that we might not be able to break anytime soon.
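To make the “orders of magnitude” claim concrete, here is a rough extrapolation sketch using the parametric loss fit from the Chinchilla scaling paper (the constants are the published fit from Hoffmann et al. 2022; treating pre-training loss as a proxy for capability is, as noted above, a huge assumption):

```python
# Sketch: Chinchilla-style parametric loss fit, L(N, D) = E + A/N^alpha + B/D^beta.
# E is the irreducible loss term; N = parameter count, D = training tokens.
E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted pre-training loss (nats/token) for a given model size and data budget."""
    return E + A / n_params**alpha + B / n_tokens**beta

# Chinchilla itself: 70B parameters trained on 1.4T tokens.
base = loss(70e9, 1.4e12)
# Ten times the parameters AND ten times the tokens:
scaled = loss(700e9, 14e12)

print(f"70B params / 1.4T tokens -> {base:.3f}")    # ~1.94
print(f"700B params / 14T tokens -> {scaled:.3f}")  # ~1.81
```

The power-law form is the whole point: scaling everything up 10× shaves only ~0.12 nats off the loss, and each further fixed improvement toward the irreducible term E costs multiplicatively more data and compute.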
Nobody saw any indication of the atomic bomb before it was created. In hindsight would it have been rational to worry?
Your claims about the compute and data needed, and the alleged limits, remind me of Heisenberg, who thought there was no reason to worry because he had miscalculated the amount of U-235 that would be needed. It seems humans are doomed to keep repeating this mistake and underestimating the severity of catastrophic long tails.
There is no indication for many catastrophic scenarios, and truthfully I don’t worry about any of them.
What does “no indication” mean in this context? Can you translate that into probability speak?
No indication in this context means that:
Our current paradigm is almost depleted. We are hitting a wall with both data (PaLM was trained on 780B tokens; roughly 3T tokens are publicly available, and a few more trillion can be found in closed systems, but that’s it) and compute (we will soon hit Landauer’s limit, so no more exponentially cheaper computation; current technology is only about three orders of magnitude above this limit).
What we currently have is very close to what we will ultimately be able to achieve with the current paradigm, and it is nowhere near AGI. We need to solve either the data problem or the compute problem.
There is no practical way to solve the data problem ⇒ we need a new AI paradigm that does not depend on existing big data.
I assume that we are already using existing resources nearly optimally, and that no significantly more powerful AI paradigm will be created until we have significantly more powerful computers. To build significantly more powerful computers, we need to sidestep Landauer’s limit, e.g. by using reversible computing or some other completely different hardware architecture.
There is no indication that such an architecture is currently in development and ready to use. It will probably take decades for one to materialize, and it is not even clear whether we can build such a computer with our current technologies.
We will need several technological revolutions before we can increase our compute significantly, which will hamper the development of AI, perhaps indefinitely. We might need major advances in materials science, quantum science, etc. just to be theoretically able to build computers significantly better than what we have today. Then we will need to develop the AI algorithms to run on them and hope that this is finally enough to reach AGI levels of compute. Even then, it might take additional decades to actually develop the algorithms.
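The headline numbers in the list above can be sanity-checked with a few lines of arithmetic. (The energy-per-bit figure for current hardware is my own rough assumption, not a measured value; published estimates vary by an order of magnitude.)

```python
import math

# Landauer's limit: minimum energy to erase one bit of information at temperature T.
k_B = 1.380649e-23  # Boltzmann constant, J/K (exact under the 2019 SI definition)
T = 300.0           # room temperature, K
landauer_j_per_bit = k_B * T * math.log(2)
print(f"Landauer limit at 300 K: {landauer_j_per_bit:.2e} J/bit")  # ~2.87e-21 J

# Assumed (not measured here) energy per bit operation in current hardware;
# a few attojoules is a common ballpark for modern CMOS switching.
current_j_per_bit = 3e-18
headroom_orders = math.log10(current_j_per_bit / landauer_j_per_bit)
print(f"Compute headroom: ~{headroom_orders:.1f} orders of magnitude")

# Data wall: PaLM's 780B training tokens vs ~3T publicly available tokens.
palm_tokens, public_tokens = 780e9, 3e12
print(f"Public-data headroom over PaLM: ~{public_tokens / palm_tokens:.1f}x")
```

Under these assumptions, even perfect Landauer-limited irreversible hardware buys only about a factor of a thousand in energy efficiency, and the open web offers only a single-digit multiple of PaLM’s training set, so neither wall is many orders of magnitude away.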
I don’t think any of the claims you just listed are actually true. I guess we’ll see.