Even if there are risks to using analogies for persuasion, we need analogies in order to persuade people. While a lot of people here are strong abstract thinkers, that's actually quite rare. Most people need something more concrete to latch onto. Unilateral disarmament here is a losing strategy, and it isn't justified anyway, since I don't think the analogies are as weak as you think. If you tell me which two analogies above you consider the weakest, I'm confident I could steelman at least one of them.
If we want to improve epistemics, a better strategy would probably be to pair analogies (at least for longer texts, within reason). That is: identify an analogy that describes how you think about AI, identify an alternative, plausible analogy for how one might think about it, and then explain why your analogy is better, or where you believe AI lies between the two.
Many proponents of AI risk seem happy to critique analogies when they don’t support the desired conclusion, such as the anthropomorphic analogy.
Of course! Has there ever been a single person in the entire world who embraced all analogies rather than only the ones they found useful and relevant?
Maybe you're claiming that AI risk proponents reject analogies in general when someone uses an analogy that supports the opposite conclusion, but accept the validity of analogies when they support their own conclusion. If that were happening, it would be bad, but I don't think it actually is. My guess is that you've seen situations where someone used an analogy to critique AI safety, the AI safety person said something along the lines of "analogies are often misleading", and you took this as a rejection of analogies in general rather than as a reminder to check whether the analogy actually applies.