There are many obstacles with no obvious solutions, and none that money can simply buy.
The claim that current AI is superhuman at just about any task we can benchmark is not correct. The problems being explored are chosen because researchers think AI has a shot at beating humans at them. Think about how many real-world problems we pay other people money to solve, problems we could benchmark, that aren’t being solved by AI. Think about why those problems still require humans right now.
My upper bound is much more than 15 years, because I don’t feel I have enough information. One thing I worry about is that this community tends to promote confidence, especially when there is news to react to and some leaders have stated their confidence. Sure, condition on new information. But when a new LLM comes out, I want to hear more integration of the opposing view than strawmanning it. It feels like every active voice on LW treats 10 or 15 years as the upper bound on when destructive AGI will arrive, which is probably closer to the lower bound for most non-LW/rationality-based researchers working on LLMs or deep learning. I want to hear more about that discrepancy than ‘they don’t consider the problem the way we do, and we have a better bird’s-eye view’. I want to understand how the estimates are arrived at. I feel that with more explanation, and more variance in the estimates, the folks on Hacker News would be able to understand and discuss the gap rather than write off the entire community as crazy, as they have here.
Thanks for sharing. You have good points, and so do they. Failing to engage respectfully, and on their own terms, with the people actually working on these topics alienates the very people you need.
They also make a great point there, one I see considered a lot in academia and less on this forum: we don’t need misaligned AGI to be in deep trouble with misaligned AI. Unfriendly AI is causing massive difficulties today, right now, very concrete difficulties that we need to find solutions for.
And a lot of us here are very receptive to companies’ claims about what their products can do, claims that have generally not been written by the engineers actually working on them but by the marketing department. Every programmer I know working at a large company rants to no end that their own marketing department, popular-science articles, and the news represent their work as already able to do things it most definitely cannot do, things they highly doubt it will do prior to release, let alone reliably and well.