and then rely on my internal motivational drives to find those facts compelling.
Doesn’t this assume that our internal motivational drives, our core values, are sufficiently aligned that our “oughts” also align? This strikes me as an unreasonable assumption.
If you’re trying to convince me to do some thing, X, then you must want me to do X too. So we must be at least that aligned.
We don’t have to be aligned in every regard. And you needn’t yourself value every consequence of X that you hold up to me to entice me to do X. But you do have to understand me well enough to know that I find that consequence enticing.
But that seems to me to be both plausible and enough to support the kind of dialectical moral argumentation that I’m talking about.