[Question] What are the main arguments against AGI?
Recently, I have been trying to reason out why I believe what I believe (regarding AGI). However, it appears to me that there is not enough discussion of the arguments against AGI (more specifically, of AGI skepticism), and such a discussion might be of benefit.
Is this because the arguments are too weak, or because the AI safety community is (understandably) biased towards imminent AGI?
This is also partly a reaction to recent advancements (such as o3) and to alarmingly short timelines (less than 3 years). I want to understand the other side's points as well.
Based on what I found on the internet, the main arguments are roughly the following (paraphrased, since most of the sources are informal, such as Wikipedia):
- Outdated arguments, usually statements from scientists made around the 2010s, before LLMs.
- Ethics-oriented arguments, stating that dangers from AGI are merely distractions from the real, present-day harms of AI (racism, bias, etc.).
- Frontier-lab propaganda, i.e. labs simply claim that AGI is happening soon to keep their stakeholders happy and the investments coming.
- Cognitive science arguments, stating that it is intractable to create a human-level mind using a computer.
- LeCun-style arguments, where the belief in AGI is there, but the safety considerations are dismissed.
What do people think? What are some good resources or researchers that offer a strong counterpoint to the imminent-AGI view?