I don’t believe for one moment that using a Balrog analogy actually makes people understand the argument when they otherwise wouldn’t.
It is a fallacy to think of AI risk as being like Balrogs just because someone has written a plausible-sounding story comparing it to Balrogs. And that seems to be the main effect of the Balrog analogy.
I disagree; I think there is value in analogies when they are used carefully.
Yes, I agree with this too: you have to be careful not to implicitly treat fiction as evidence.