Not really. My understanding of AI is far from comprehensive; I know less about it than about my own fields (Philo, BioAnthro). I've merely read all of FHI, most of MIRI, half of AIMA, Paul's blog, maybe four popular and two technical books on related issues, and at most 60 papers on AGI per se; I don't code, and I have only a coarse-grained understanding of the field. But in the little research and time I have had to look into it, I have seen no convincing evidence for a cap on the level of sophistication a system's cognitive abilities can reach. Nor have I seen very robust evidence that would support the hypothesis of a fast takeoff.
The fact that we have not fully disentangled, conceptually, the dimensions of which intelligence is composed is mildly embarrassing, though. It may be that AGI is a deus ex machina because, more in line with Minsky or Goertzel than with MIRI or LessWrong, general intelligence will turn out to be a plethora of abilities with no single common denominator, often superimposed in a robust way.
But for now, nobody who is publishing seems to know for sure.
Beware the Dunning–Kruger effect.
Looking at the big picture, you could also say that there is no convincing evidence for a cap on the lifespan of a biological organism. Heck, some trees have been alive for over 10,000 years! Yet once you look at the nitty-gritty details of biomedical research, it becomes clear that even adding just a few decades to the human lifespan is a very hard problem, and researchers still largely don't know how to solve it.
It’s the same for AGI. Maybe truly super-human AGI is physically impossible due to complexity reasons, but even if it is possible, developing it is a very hard problem and researchers still largely don’t know how to solve it.
I think you mistook my claim for sarcasm. I actually think I don't know much about AI (not nearly enough to make a robust assessment).