Ideally, people invoke analogies in order to make a point. And then readers / listeners will argue about whether the point is valid or invalid, and (relatedly) whether the analogy is illuminating or misleading. I think it’s really bad to focus discussion on, and police, the analogy target, i.e. to treat certain targets as better or worse, in and of themselves, separate from the point that’s being made.
For example, Nora was just comparing LLMs to mattresses. And I opened my favorite physics textbook to a random page and there was a prominent analogy between electromagnetic fields and shaking strings. And, heck, Shakespeare compared a woman to a summer’s day!
So when you ask whether AIs are overall more similar to aliens or more similar to LLMs, I reject the question! It’s off-topic. Overall, mattresses & LLMs are very different, and electric fields & strings are very different, and women & summer’s days are very different. But there’s absolutely nothing wrong with analogizing them!
And if someone complained “um, excuse me, but I have to correct you here, actually, LLMs and mattresses are very different, you see, for example, you can sleep on mattresses but you can’t sleep on LLMs, and therefore, Nora, you should not be saying that LLMs are like mattresses”, then I would be very annoyed at that person, and I would think much less of him. (We’ve all talked to people like that, right?)
…And I was correspondingly unhappy to see this post, because I imagine it backing up the annoying-person-who-is-totally-missing-the-point from the previous paragraph. I imagine him saying “You see? I told you! LLMs really are quite different from mattresses, and you shouldn’t analogize them. Check this out, here’s a 2000-word blog post backing me up.”
Of course, people policing the target of analogies (separately from the point being made in context) is a thing that happens all the time, on all sides. I don’t like it, and I want it to stop, and I see this post as pushing things in the wrong direction. For example, this thread is one where I was defending myself against analogy-target-policing last month. I stand by my analogizing as being appropriate and helpful in context. I’m happy to argue details if you’re interested—it’s a nuclear analogy :-P
I can’t speak to your experience, but some of my reactions to your account are:
- If people are policing your analogies between AI and domestic animals, the wrong response is to say that we should instead police analogies between AI and aliens; the right response is to say that analogy-target-policing is the wrong move and we should stop it altogether, i.e. stop policing the target of an analogy independently from the point being made in context.
- I wonder if what you perceive to be analogy-target-policing is (at least sometimes) actually people just disagreeing with the point that you’re making, i.e. saying that the analogy is misleading in context.
Yes, LessWrong has some partisans who will downvote anything with insufficiently doomy vibes without thinking too hard about it; sorry, I’m not happy about that either :-P (And vice-versa to some extent on EAForum… or maybe EAF has unthinking partisans on both sides, not sure. And Twitter definitely has an infinite number of annoying unthinking partisans on both sides of every issue.)
> For crying out loud, LLMs are already considered “AIs” by most people!
FWIW, you and I and everyone else are normally trying to talk about “future AI that might pose an x-risk”, which everyone agrees does not yet exist. A different category is “AI that does not pose an x-risk”, and this is a very big tent, containing everything from Cyc and GOFAI to MuZero and (today’s) LLMs. So the fact that people call some algorithm X by the term “AI” doesn’t in and of itself imply that X is similar to “future AI that might pose an x-risk” in any nontrivial way—it only implies similarity in the (trivial) ways that LLMs and MuZero and Cyc are all similar to each other (e.g. they all run on computers).
Now, there is a hypothesis that “AI that might pose an x-risk” is especially similar to LLMs in particular—much more than it is similar to Cyc, or to MuZero. I believe that you put a lot of stock in that hypothesis. And that’s fine—it’s not a crazy hypothesis, even if I happen personally to doubt it. My main complaint is when people forget that it’s a hypothesis and instead treat it as self-evident truth. (One variant of this is people who understand how LLMs work but don’t understand how MuZero or any other kind of ML works, and instead just assume that everything in ML is pretty similar to LLMs. I am not accusing you of that.)