I am personally convinced (I am a one-time donor myself), but the optimal-charity argument in favour of Friendly AI research and development (which will be fully developed in this paper) is something I can use with my friends. They are very much the practical type and will respond to getting more bang for their buck, and to arguments about where their marginal rupee of charity should go.
There are inferential gaps, and when I, a known sci-fi fan, present the argument, I get all sorts of looks. If I had a peer-reviewed paper to show them, that would work nicely in my favour.