My point is that we should stop relying on analogies in the first place. Use detailed object-level arguments instead!
Yeah, seriously. As a field, why haven’t we outgrown analogies? It drives me crazy, how many loose and unsupported analogies get thrown around. To a first approximation: Please stop using analogies as arguments.
I sure don’t think I rely much on analogies when I reason or argue about AI risk. I don’t think I need them. I encourage others to use them less as well. Using them less brings clarity of thought and makes it easier to respond to evidence (in my experience).
Here’s a post you wrote. I claim that it’s full of analogies. :)
E.g.
you say “desperately and monomaniacally” (an analogy between human psychology and an aspect of AI),
“consider two people who are fanatical about diamonds…” (ditto),
“consider a superintelligent sociopath who only cares about making toasters…” (an analogy between human personality disorders and an aspect of AI),
“mother … child” (an analogy between human parent-child relationships and an aspect of AI). Right?
(…and many more…)
I’m guessing you’ll say “yeah but I was making very specific points! That’s very different from someone who just says ‘AI is like aliens in every respect, end of story’”. And I agree!
…But the implication is: if someone says “AI is like aliens” in the context of making a very specific point, we should likewise all agree that that’s fine. Or, more precisely: their argument might or might not be a good argument, but if it’s a bad one, it’s not bad because it involves an analogy to aliens per se.
I can give non-analogical explanations of every single case, written in pseudocode or something similarly abstract. Those are mostly communication conveniences, though I agree they shade in connotations from human life, which is indeed a cost. Maybe I should say: I rarely use analogies as load-bearing arguments rather than just shorthand? I rarely use analogies without justifying why they share common mechanisms with the technical subject being discussed?
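For concreteness, here’s a toy sketch (purely illustrative, nothing in it is load-bearing, and the names are made up) of what I mean by a non-analogical restatement: instead of saying the system “desperately and monomaniacally” cares about X, I can say its decision rule ranks options by a single predicted quantity and by nothing else.

```python
# Toy sketch (illustrative only): "desperately and monomaniacally cares
# about X", restated without the psychological analogy. The claim becomes:
# options are ranked *solely* by their predicted effect on one quantity X;
# no other considerations enter the ranking, and there is no "good enough"
# threshold at which the process stops.

def choose_action(actions, predict_x_gain):
    # predict_x_gain(a): the system's estimate of how much option `a`
    # increases X. Side effects never appear in the comparison at all;
    # that absence is the whole content of "monomaniacal".
    return max(actions, key=predict_x_gain)

# Hypothetical numbers, just to make it runnable:
print(choose_action(["a1", "a2", "a3"],
                    {"a1": 0.2, "a2": 5.0, "a3": 1.3}.get))  # -> "a2"
```

The analogy is just the short way of saying that; the pseudocode version is the thing actually doing the argumentative work.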
This is The Way :)
I guess I’m missing something crucial here.
How could you reason without analogies about a thing that doesn’t exist (yet)?
Ehhh, the whole field is a pile of analogies. Artificial neural networks bear little resemblance to biological ones, “reward” in reinforcement learning has nothing to do with what we usually mean by reward, and “attention” layers certainly don’t capture the mechanism of psychological attention...
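Take “reward”: mechanically it’s just a number fed into an update rule. A toy sketch (illustrative only; this is just standard tabular Q-learning, not anything from the post above):

```python
from collections import defaultdict

# Toy sketch (illustrative only): in a standard tabular Q-learning update,
# "reward" is a float in an arithmetic expression. None of the everyday
# connotations of reward (pleasure, desert, motivation) are part of the
# mechanism being described.

alpha, gamma = 0.1, 0.99        # step size and discount factor
Q = defaultdict(float)          # state-action value estimates

def td_update(state, action, reward, next_state, next_actions):
    # The "reward" argument is a plain number; the whole update is
    # arithmetic on it and on stored estimates.
    best_next = max((Q[(next_state, a)] for a in next_actions), default=0.0)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

td_update("s0", "go", reward=1.0, next_state="s1", next_actions=["go", "stop"])
print(Q[("s0", "go")])  # 0.1: a number got nudged, nothing "rewarding" about it
```

The terminology borrows from human psychology even when the mechanism is nothing but arithmetic.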