Only if those high-profile refutations are
a) quick
b) non-reliant upon specialist knowledge
c) honest-seeming.
I don’t know a huge amount about either issue (which is revealing in itself, coming from an interested lurker and occasional participant here), but I think combining all three is tough.
You could try to make a refutation seem honest, but you need certain technical knowledge to really get these arguments, and it’s contentious technical knowledge, in that most relevant scientists don’t buy Less Wrong’s take on either issue. So I might feel an argument seems convincing, but then remember that I can find pro- or anti-global-warming arguments convincing too, if the person advancing them is far more informed on the scientific issues than me. So this would fail totally on (a) and (b): I’d have to feel I could rely on my own knowledge over the experts who disagree with LW and SIAI, and I have other things to do with my time.
You can go for quick and easy, but the argument I’d expect here is either ‘so much to lose from evil AI that it counterbalances the low likelihood’ or ‘so much to gain from immortality that it counterbalances the low likelihood’. And both of those simply feel like cheats to most people: it’s too much like Pascal’s Wager, and feels like a trick you can play just by raising the stakes.
Finally, you can address the root of the suspicions by convincing people that you don’t have the tendency to be attracted by the idea of a greater mind, a father substitute that can solve the world’s problems; that you don’t look ahead to a golden future age; and that you’re intensely relaxed about your own mortality. But I don’t know how you could do that. The last is particularly unbelievable for me.