My focus was meant to be purely on AGI. I guess the addition of AGI x-risks was my way of hinting that they would come together with it (that is why I mentioned LeCun).
I feel like there is a strong belief that AGI is just around the corner (and it might very well be), but I wanted to know what the counterarguments to that claim are. I know there is a lot of solid evidence that we are moving towards more intelligent systems, but understanding the gaps in this prediction might provide useful information (for updating timelines, changing research focus, etc.).
Personally, I might be in an “in-between” position, where I am not sure what to believe (in terms of timelines). I am safety-inclined, and I applaud the efforts of people in the field, but there might be a blind spot in believing that AGI is coming soon (when the reality might be very different). What if that is not the case? What then? What are the implications for safety research? More importantly, what are the implications for the field of AI as a whole? Companies and researchers might very well ride the hype wave to keep getting funded, gain recognition, etc.
Perhaps an analogy would help. Think about cancer. Everyone knows it exists, and that is not something that is going to be argued about (hopefully). I cannot come in and ask for the arguments in support of the existence of cancer, because it is already there and proven to be there. In the context of AGI, though, I feel like there is a lot of speculation, and a lot of people trying to claim they knew the exact day AGI would arrive. That feels like a distraction to me. Even the posts about “getting your things in order” do. It feels wrong to just give up on everything without even considering the arguments against the truth you believe in.
These are some very valid points, and it does indeed make sense to ask “who would actually do it/advocate for it/steer the industry, etc.”. I was just wondering what the chances are of such an approach taking off, but maybe the current climate does not really allow for such major changes to the systems’ architecture.
Maybe my thinking is flawed, but the hope with this post was to confirm whether or not it would be harmful to work on neuro-symbolic systems. Another point was to use such a system on benchmarks like ARC-AGI to prove that an alternative to the dominant LLMs is possible, while also being interpretable to some degree. The linked post by @tailcalled makes a good point, but I also noticed some criticism in the comments regarding concrete examples of how interpretable (or not) such probabilistic/symbolic systems really are. Perhaps some research on this question would not be harmful at all, but that is just my opinion.