Okay, that and your belief that rule-utilitarianism isn’t consequentialism leads me to think that your version of consequentialism is roughly “if you’re attempting to be an FAI and you’re not doing lots of multiplication then you’re doing it wrong”. Too far off?
Instrumental vs terminal goals. Consequentialism is the ideal, but we can’t implement it, so we have to approximate it with deontological rules due to the limitations of our brains. The rules don’t get their moral authority from nowhere; they depend on being useful for reaching the actual goal. Or: the only reason we follow the rules is that we know we’ll get a worse outcome if we don’t.
It’s the difference between a priori rules and a posteriori rules, I guess?
I’m all for a posteriori rules, but not a priori rules.