That doesn’t sound like a moral imperative to me, though my definition may be in need of an update. To my way of thinking, a moral imperative involves a systematic way of ranking alternatives so as to satisfy one or more terminal values (maximizing happiness, obeying god, being virtuous, etc.).
Because it is impossible to do anything other than what you want to do, your moral imperative just reduces to “do things successfully,” which doesn’t really discriminate among possible alternatives. (Unless it means “do the easiest thing possible, because you’ll be most likely to succeed.” But that doesn’t seem to be what you were getting at.)
As I currently define moral imperatives, they’re meta-wants, structures that tell you what you should want to want, if that makes sense.
“it is impossible to do anything other than what you want to do”
It is not possible to do something other than what you actually do in a situation. It is possible for non-perfect agents (like, say, humans) to do something other than what they want.
Technically, what I said isn’t a moral imperative, because it doesn’t say anything about what the “want” is. It is, however, advice that (nearly) all minds want to follow.
A meta-moral imperative, then. Whatever it is that you want to do, you should actually do it, and do it in a way that maximizes success. Or in WrongBot’s scheme, whatever it is that you wished you wanted (intended) to do, you should actually want (intend) it.