This community mostly isn’t worried about current AI. We’re worried about future AIs.
The rate at which they get there is difficult to predict. But it’s not “anyone’s guess”. People with more time-on-task in thinking about both current AI and what it takes to constitute a useful, competent, and dangerous mind (e.g., human cognition or hypothetical general AI) tend to have short timelines.
We could be wrong, but assuming it’s a long way off is even more speculative.
That’s why we’re trying to solve alignment ASAP (or, in some cases, arguing that it’s so difficult that we must stop building AGI altogether). It’s not clear which is the better strategy, because we haven’t gotten far enough on alignment theory. That’s why you see a lot of conflicting claims from well-informed people.
But dismissing the whole thing is just wishful thinking. Even when experts do it, it doesn’t make sense, because there are other experts with equally good arguments that it’s deathly dangerous in the short term. Nobody knows. So seeing non-experts dismiss it because they “trust their intuitions” is somewhere between tragedy and comedy.
If you’re actually interested, see my Capabilities and alignment of LLM cognitive architectures. That’s one way we can get from where we are, which is very limited, to “Real AGI” that will be both useful and dangerous.
Thanks.