I genuinely don’t know what you want elaboration of. Reacts are nice for what they are, but saying something out loud about what you want to hear more about / what’s confusing / what you did and didn’t understand/agree with, is more helpful.
Re/ “to whom not...”, I’m asking Wei: what groups of people would not be described by the list of 6 “underestimating the difficulty of philosophy” things? It seems to me that broadly, EAs and “AI alignment” people tend to favor somewhat too concrete touchpoints like “well, suppressing revolts in the past has gone like such and such, so we should try to do similar for AGI”. And broadly they don’t credit an abstract argument about why something won’t work, or would only work given substantial further philosophical insight.
Re/ “don’t think thinking …”, well, if I say “LLMs basically don’t think”, they’re like “sure they do, I can keep prompting them and they say more things, and I can even put that in a scaffold” or “what concrete behavior can you point to that they can’t do”. Like, bro, I’m saying it can’t think. That’s the tweet. What thinking is, isn’t clear, but That thinking is should be presumed, pending a forceful philosophical conceptual replacement!
That is, in fact, a helpful elaboration! When you said
Most people who “work on AI alignment” don’t even think that thinking is a thing.
my leading hypotheses for what you could mean were:
Using thought, as a tool, has not occurred to most such people
Most such people have no concept whatsoever of cognition as being a thing, the way people in the year 1000 had no concept whatsoever of JavaScript being a thing.
Now, instead, my leading hypothesis is that you mean:
Most such people are failing to notice that there’s an important process, called “thinking”, which humans do but LLMs “basically” don’t do.
This is a bunch more precise! For one, it mentions AIs at all.
As my reacts hopefully implied, this is exactly the kind of clarification I needed—thanks!
Like, bro, I’m saying it can’t think. That’s the tweet. What thinking is, isn’t clear, but That thinking is should be presumed, pending a forceful philosophical conceptual replacement!
Sure, but you’re not preaching to the choir at that point. So surely the next step in that particular dance is to stick a knife in the crack and twist?
That is -
“OK, buddy:
“Here’s property P (and if you’re good, Q and R and...) that [would have to]/[is/are obviously natural and desirable to]/[is/are pretty clearly critical if you want to] characterize ‘thought’ or ‘reasoning’ as distinct from whatever it is LLMs do when they read their own notes as part of a new prompt and keep chewing them up and spitting the result back as part of the next prompt for themselves to read.
Here’s thing T (and if you’re good, U and V and...) that an LLM cannot actually do, even in principle, but that would be trivially easy for (say) an uploaded (and sane, functional, reasonably intelligent) human H, even if H is denied (almost?) all of their previously consolidated memories and is just working from some basic procedural memory and whatever Magical thing this ‘thinking’/‘reasoning’ thing is.”
And if neither you nor anyone else can do either of those things… maybe it’s time to give up and say that this ‘thinking’/‘reasoning’ thing is just philosophically confused? I don’t think that that’s where we’re headed, but I find it important to explicitly acknowledge the possibility; I don’t deal in more than one epiphenomenon at a time and I’m partial to Platonism already. So if this ‘reasoning’ thing isn’t meaningfully distinguishable in some observable way from what LLMs do, why shouldn’t I simply give in?