Minor text correction: “dedicated committee of human-level AIs dedicated” repeats the same adjective in a small span.
More wide-ranging:
Perhaps the paper would be stronger if it explained why philosophers might feel that convergence is probable. For example, in their experience, human philosophers / philosophies converge.
In a society where the members are similar to one another, and much less powerful than the society as a whole, the morality endorsed by the society might be based on the memes that can spread successfully. That is, a meme like ‘everyone gets a vote’ or the golden rule is symmetrical in a way that ‘so-and-so gets the vote’ isn’t. The memes that spread successfully might be more likely to be symmetrical. There could be convergent evolution of memes, and this could explain human philosophers’ experience of convergence.
> Perhaps the paper would be stronger if it explained why philosophers might feel that convergence is probable. For example, in their experience, human philosophers / philosophies converge.
I’m deliberately avoiding that route. If I attack, or mention, moral realism in any form, philosophers are going to get defensive. I’m hoping to skirt the issue by narrowing the connotations of the terms (efficiency rather than intelligence and, especially, rationality).
You don’t think a moral realist will notice that your paper contradicts moral realism and get defensive anyway? Can you write out the thoughts that you’re hoping a moral realist will have after reading your paper?
> You don’t think a moral realist will notice that your paper contradicts moral realism and get defensive anyway?
Less so.
> Can you write out the thoughts that you’re hoping a moral realist will have after reading your paper?
“All rational beings will be moral, but this paper worries me that AI, while efficient, may not end up being rational. Maybe it’s worth worrying about.”
> “All rational beings will be moral, but this paper worries me that AI, while efficient, may not end up being rational. Maybe it’s worth worrying about.”
Why not argue for this directly, instead of making a much stronger claim (“may not” vs “very unlikely”)? If you make a claim that’s too strong, that might lead people to dismiss you instead of thinking that a weaker version of the claim could still be valid. Or they could notice holes in your claimed position and be too busy trying to think of attacks to have the thoughts that you’re hoping for.
(But take this advice with a big grain of salt since I have little idea how academic philosophy works in practice.)
I’m not an expert on academic philosophy either. But I feel the stronger claim might work better; I’ll try and hammer the point “efficiency is not rationality” again and again.
Actually scratch that and reverse it—I’ve got an idea how to implement your idea in a nice way. Thanks!
I’m confused. “May not” is weaker than “very unlikely,” in the supplied context.