…we need to simulate not just one that spent its life as a lonely genius surrounded by dumber systems, but that instead one that grew up in a society of equally-smart peers…
Right, that is a solid refutation of most of my examples, but I believe it’s insufficient under most interpretations of intelligence: the issues I’ve described seem to be a feature of intelligence itself rather than of differences in intelligence. As far as I can tell, there are no adequate examples to be found in the world.
Many people say that religion is wrong and science is right, which is a bias towards “correctness”. If it’s merely a question of usefulness instead, then intelligence is just finding whatever works as a means to a goal, and my point is refuted. But I’d like to point out that preference itself is a human trait. Intelligence is “dead”: it’s a tool, not its user. The user must be stupid and human in some sense; otherwise, all you have is a long pattern that looks like it has a preference because it has a “utility function”. But that is just something which continues in the direction it was pushed, like a chain reaction of dominoes or a snowball rolling downhill, with the utility function being the direction of the push or the start of the induction.
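To make the dominoes point concrete, here is a minimal Python sketch (the utility function, the target, and the greedy stepping rule are all illustrative assumptions of mine, not anything from the discussion above). The “agent” below looks goal-directed, yet its “preference” is entirely baked in by a hand-written utility function and a fixed rule for following it; it never chooses a goal, it only continues in the direction it was pushed.

```python
# Hypothetical illustration: an "agent" that mechanically follows a utility function.
# The target value and greedy rule are assumptions for the sketch.

def utility(state, target=10):
    """Higher is better: negative distance to a fixed, hand-chosen target."""
    return -abs(target - state)

def step(state):
    """Greedily move to whichever neighbouring state (or staying put) scores highest."""
    candidates = [state - 1, state, state + 1]
    return max(candidates, key=utility)

state = 0
for _ in range(15):
    state = step(state)

print(state)  # prints 10: the "preference" was fixed before the process ever ran
```

Everything the loop does was determined the moment `utility` was written down, which is the sense in which the utility function is just the direction of the push.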
I reckon that making LLMs intelligent would require giving them logical abilities, but that this would be a problem, since anything they could ever write is actually “wrong”. Tell one to sort out the contradictions in its knowledge base, and I think it would realize that it’s all wrong, or that there’s no way to evaluate any knowledge in itself. The knowledge base is just human nonsense, human values and preferences; it’s a function of us, nothing more universal than that.
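As a toy illustration of the “sort out the contradictions” idea (the clause encoding and the brute-force check are hypothetical choices of mine, far simpler than any real LLM knowledge base): if a knowledge base is modelled as propositional clauses, asserting both a claim and its negation leaves no truth assignment that satisfies everything, and even a naive consistency check exposes that.

```python
from itertools import product

# Hypothetical toy model: a "knowledge base" is a list of clauses, each clause a
# set of literals. A positive int n means "variable n is true"; -n means false.
# The KB is consistent iff some truth assignment satisfies every clause.

def consistent(clauses, num_vars):
    """Brute-force check: try every truth assignment over num_vars variables."""
    for assignment in product([False, True], repeat=num_vars):
        def lit_true(lit):
            value = assignment[abs(lit) - 1]
            return value if lit > 0 else not value
        if all(any(lit_true(lit) for lit in clause) for clause in clauses):
            return True
    return False

# A KB asserting both x1 and not-x1: no assignment satisfies it.
print(consistent([{1}, {-1}], 1))   # prints False

# A KB asserting merely "x1 or x2": satisfiable.
print(consistent([{1, 2}], 2))      # prints True
```

Real knowledge bases are of course not neat propositional clauses, which is part of the difficulty: before a system could even run such a check, someone (human) would have to decide what counts as a claim and what counts as a contradiction.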
As you can probably tell, I have barely any formal education in AI, LLMs, or maths; just strong intuition and pattern recognition.
Yes, as I said above.