It seems to me that there are arguments to be made in both directions.[1] It’s not clear to me just yet which stance is correct. Maybe yours is! I don’t know.
My point is that it’s understandable for intelligent people to suspect that there isn’t enough data available yet to produce ASI on the current approach. You might disagree, and maybe your disagreement is even correct, but I don’t think the situation is so clear-cut that it’s incomprehensible why many people remain unpersuaded.
As a quick gesture at the point: as far as I know, all the data LLMs ingest has already passed through one filter, namely humans. We produced every token they took in as training data. A newborn, even a blind one, doesn’t have this limitation, and I’d expect that a newborn somehow given this limitation could very well end up with stunted intelligence! The closer analogy, I think, is not a blind newborn but a numb one, lacking tactile and proprioceptive senses.