As my reacts hopefully implied, this is exactly the kind of clarification I needed—thanks!
Like, bro, I’m saying it can’t think. That’s the tweet. *What* thinking is isn’t clear, but *that* thinking is should be presumed, pending a forceful philosophical-conceptual replacement!
Sure, but you’re not preaching to the choir at that point. So surely the next step in that particular dance is to stick a knife in the crack and twist?
That is -
“OK, buddy:
Here’s property P (and, if you’re good, Q and R and...) that [would have to]/[is/are obviously natural and desirable to]/[is/are pretty clearly a critical part if you want to] characterize ‘thought’ or ‘reasoning’ as distinct from whatever it is LLMs do when they read their own notes as part of a new prompt and keep chewing them up and spitting the result back into the next prompt for themselves to read (the loop sketched below).
Here’s thing T (and, if you’re good, U and V and...) that an LLM cannot actually do, even in principle, but which would be trivially easy for (say) an uploaded (and sane, functional, reasonably intelligent) human H, even if H is denied (almost?) all of their previously consolidated memories and is just working from some basic procedural memory and whatever Magical thing this ‘thinking’/‘reasoning’ thing is.”
And if neither you nor anyone else can do either of those things… maybe it’s time to give up and say that this ‘thinking’/‘reasoning’ thing is just philosophically confused? I don’t think that’s where we’re headed, but I find it important to explicitly acknowledge the possibility; I don’t deal in more than one epiphenomenon at a time, and I’m partial to Platonism already. So if this ‘reasoning’ thing isn’t meaningfully distinguishable in some observable way from what LLMs do, why shouldn’t I simply give in?
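(For concreteness, here is the loop I keep gesturing at: a minimal Python sketch, where `generate` is a hypothetical stand-in for whatever completion call you like, not anyone’s actual API.)

```python
# Minimal sketch of the "chew and spit" loop: each round, the model's output
# is appended to its own next prompt, so it "reads its own notes".
# `generate` is a hypothetical placeholder, not a real library call.

def generate(prompt: str) -> str:
    # Stand-in for one LLM completion pass over `prompt`.
    return f"<note written after reading {len(prompt)} chars of context>"

def chew_and_spit(question: str, rounds: int = 5) -> str:
    context = question
    for _ in range(rounds):
        note = generate(context)          # the model writes a note...
        context = context + "\n" + note   # ...which it reads in the next round
    return context
```

The whole challenge above is to name a P or a T that no amount of this looping can, even in principle, capture.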