And I did say that I didn’t consider the rationality of GPT systems fake just because it was emulated.
The point is that there’s evidence that LLMs might already be acquiring a separate, non-emulated version at the current scale. There is reasoning from emulating people who show their work, and reasoning from predicting their results by whatever means works even when the work is not shown. The latter requires either drawing on other cases where the work is shown, or attaining the necessary cognitive processes some other way, in which case those processes don’t necessarily resemble human reasoning, and in that sense they are not imitating human reasoning.
As I’ve noted in a comment on that post, I’m still not sure that LLM reasoning ends up being very different: even if we are talking about what’s going on inside rather than what the masks say out loud, it might convergently end up in approximately the same place. Though Hinton’s recent reminders of how many more facts LLMs manage to squeeze into fewer parameters than human brains have somewhat shaken that intuition for me.
Those are examples of LLMs being rational. LLMs are often rational and will only get better at being rational as they improve. But I’m trying to focus on the times when LLMs are irrational.
I agree that AI is aggregating its knowledge to perform rationally. But that still tells us nothing about its capacity to be irrational.
There’s the underlying rationality of the predictor and the second-order rationality of the simulacra. It’s rather like humans’ highly rational intuitive reasoning (modulo some bugs) coexisting with their much less rational high-level thought.
Okay, sure. But those “bugs” are probably something the AI risk community should take seriously.
I am not disagreeing with you in any of my comments, and I’ve strong-upvoted your post; your point is very good. I’m only quibbling with fragments to add detail, but I agree with the bulk of it.
Ah okay. My apologies for misunderstanding.