Forget about what the social consensus is. If you have a technical understanding of current AIs, do you truly believe there are any major obstacles left? The kind of problems that AGI companies could not reliably tear down with their resources? If you do, say so in the comments, but please do not state what those obstacles are.
I guess the reasoning behind the “do not state” request is something like: “making potential AGI developers more aware of those obstacles will direct more resources toward solving them”. But if someone is trying to create AGI, won’t they run into those obstacles anyway, making it inevitable that they become aware of them?
Yep, but they may well still direct their focus at the wrong things.
See the above example of humans originally focussing on getting AI to beat them at chess, thinking that would be the hardest problem and the pinnacle of the field. It wasn’t, by a huge margin. It cost a lot of resources and time for what was, from the start, a very doable problem. And we didn’t gain as much from solving it as we might have gained from focussing on a different problem. Engineers may well end up obsessed with optimising results on particular tasks while overlooking the fact that other tasks remain completely unaddressed and need more attention. Research on basic approaches is often far more time-consuming than research on improving an approach that already works in principle, because it is undirected; but in the long run it is also far more crucial, and more of a bottleneck.
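To illustrate why chess was “doable” in principle: the core of a classical chess engine is just minimax search with alpha-beta pruning over the game tree. Below is a minimal, illustrative sketch in Python (the function names and the toy game are mine, not from any particular engine); a real engine layers move ordering, transposition tables, and a tuned evaluation function on top of this skeleton, but the conceptual problem was well-defined from day one.

```python
import math

def alphabeta(state, depth, alpha, beta, maximizing, moves, apply_move, evaluate):
    """Generic alpha-beta search over a two-player zero-sum game tree."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    if maximizing:
        value = -math.inf
        for m in legal:
            value = max(value, alphabeta(apply_move(state, m), depth - 1,
                                         alpha, beta, False, moves, apply_move, evaluate))
            alpha = max(alpha, value)
            if alpha >= beta:   # prune: the minimizing player will avoid this branch
                break
        return value
    value = math.inf
    for m in legal:
        value = min(value, alphabeta(apply_move(state, m), depth - 1,
                                     alpha, beta, True, moves, apply_move, evaluate))
        beta = min(beta, value)
        if alpha >= beta:       # prune: the maximizing player already has better
            break
    return value

# Toy usage on a hand-built two-ply tree, just to show the search runs.
# Leaf values are payoffs for the maximizing player.
tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1", "b2"]}
leaf_values = {"a1": 3, "a2": 5, "b1": -1, "b2": 9}

best = alphabeta("root", depth=2, alpha=-math.inf, beta=math.inf, maximizing=True,
                 moves=lambda s: tree.get(s, []),
                 apply_move=lambda s, m: m,
                 evaluate=lambda s: leaf_values.get(s, 0))
print(best)  # -> 3; note that leaf "b2" is never evaluated thanks to pruning
```

The point of the sketch is that the whole search fits in a page; the decades of effort went into scaling and tuning it, not into inventing a fundamentally new kind of capability.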
Also, people are often unaware of what they’re repeatedly running into. Formulating the problem explicitly can go a long way towards finding a solution.