StopAI
Karma: −9
My view is that all innate reflexes are a form of software running on the organic Turing machine that is our body. For more on this, see the thinking of Michael Levin and Joscha Bach.
These are bad examples! Your observations basically amount to "at the point where LLMs are AGI, I will change my mind."
If it solves Pokémon one-shot, solves coding, or makes human beings superfluous for decision-making in general, I would call that AGI, and if it can code by itself, it's already taking off!
All you have shown me is that you can't think of any serious intermediate steps LLMs still have to go through before they reach AGI.