Those are examples of LLMs being rational. LLMs are often rational, and will only get better at it as they improve. But I’m trying to focus on the times when LLMs are irrational.
I agree that AI is aggregating its knowledge to perform rationally. But that still doesn’t tell us anything about its capacity to be irrational.
There’s the underlying rationality of the predictor and the second-order rationality of the simulacra. Rather like humans’ highly rational intuitive reasoning (modulo some bugs) versus their much less rational high-level thought.
Okay, sure. But those “bugs” are probably something the AI risk community should take seriously.
I’m not disagreeing with you in any of my comments, and I’ve strong-upvoted your post; your point is very good. I’m picking at fragments to add detail, but I agree with the bulk of it.
Ah okay. My apologies for misunderstanding.