You don’t think a moral realist will notice that your paper contradicts moral realism and get defensive anyway? Can you write out the thoughts that you’re hoping a moral realist will have after reading your paper?
You don’t think a moral realist will notice that your paper contradicts moral realism and get defensive anyway?
Less so.
Can you write out the thoughts that you’re hoping a moral realist will have after reading your paper?
“All rational beings will be moral, but this paper worries me that AI, while efficient, may not end up being rational. Maybe it’s worth worrying about.”
“All rational beings will be moral, but this paper worries me that AI, while efficient, may not end up being rational. Maybe it’s worth worrying about.”
Why not argue for this directly, instead of making a much stronger claim (“may not” vs “very unlikely”)? If you make a claim that’s too strong, that might lead people to dismiss you instead of thinking that a weaker version of the claim could still be valid. Or they could notice holes in your claimed position and be too busy trying to think of attacks to have the thoughts that you’re hoping for.
(But take this advice with a big grain of salt since I have little idea how academic philosophy works in practice.)
I’m not an expert on academic philosophy either. But I feel the stronger claim might work better; I’ll try to hammer the point “efficiency is not rationality” again and again.
Actually scratch that and reverse it—I’ve got an idea how to implement your idea in a nice way. Thanks!
I’m confused. “May not” is the weaker claim here, and “very unlikely” the stronger one, in the supplied context.