~All ML researchers and academics that care have already made up their mind regarding whether they prefer to believe in misalignment risks or not. Additional scary papers and demos aren’t going to make anyone budge.
I think this mostly shows that the approach used so far has been ineffective, not that academics are incapable of changing their minds. Papers and demos seem like the intuitive way to persuade academics, but if that were really how they formed their beliefs, how could they ever have come to the conclusion that AI is safe by default, a conclusion which is not supported by evidence?
I think the most useful approach right now would be to find out why some researchers are so unconcerned about safety. When you know why someone believes the things they do, it is much easier to change their mind.
I find that just identifying the worst-case scenario is sometimes enough to help significantly. A lot of the time the worst-case scenario turns out to be something which is either very unlikely to happen, or which I could deal with easily if it did happen. So the first thing I'd recommend is to try to find out exactly what you're afraid of in any given situation.
Another thing I’ve noticed is that some feelings of anxiety seem to be based on intuition whereas others are based on fear, and that the feelings which result from intuition are usually worth acting on whereas those which result from fear are typically not. For example, the urge to stand alone at the edge of a party and not speak to anyone is based on fear, whereas the feeling of awkwardness and isolation that results from actually standing alone at the edge of the party is based on correct social intuition. Whenever I make some kind of social mistake, I ask myself whether I was acting on my intuition or my fear, and in almost every case so far it has turned out that I was acting on my fear rather than my intuition; usually my intuition was to do exactly the opposite.