For example, Doctor Evil credibly commits to setting a school on fire if you don’t give him $10 million. I would consider refusing to pay up in this situation non-blameworthy, even though it causally leads to a bunch of dead schoolchildren.
The difference between the Dr. Evil example and the revealing clothing example is that if everyone precommits to not negotiating with hostage takers, Dr. Evil wouldn’t even bother with his threat; whereas a precommitment to ignore the presence of sexual predators when deciding what to wear won’t discourage them. The clothing example is in fact similar to the locked house example I mentioned here.
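As a toy illustration of that asymmetry, here is a minimal sketch of the threat game. The payoff numbers, and the assumption that the threatener conditions his threat on the victim’s policy, are illustrative assumptions of mine, not anything from the original examples:

```python
# Minimal sketch of the threat game. Payoffs and the "threatener reads the
# victim's policy" assumption are illustrative, not from the original examples.

def evil_threatens(victim_policy: str) -> bool:
    # Issuing the threat has a small cost, so Dr. Evil only bothers
    # if the victim's policy is to pay up.
    gain = 10_000_000 if victim_policy == "pay" else 0
    cost = 1
    return gain > cost

def victim_utility(victim_policy: str) -> int:
    if not evil_threatens(victim_policy):
        return 0  # with a refusal policy, the threat is never made at all
    return -10_000_000 if victim_policy == "pay" else -1_000_000_000

for policy in ("pay", "refuse"):
    print(policy, victim_utility(policy))
# pay -10000000
# refuse 0
```

The refusal precommitment wins because the threat itself is a function of the victim’s policy; in the clothing case the predators’ presence is not a function of the victim’s policy, so the analogous precommitment buys nothing.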
Yes. I think that all deontological or virtue-ethics rules that actually make sense are approximations to rule consequentialism, used when it’d be too computationally expensive to compute the consequences from scratch, and/or fudge factors to compensate for systematic errors introduced by our corrupted hardware.
The game theory issues I mentioned (e.g., UDT; the other big one being Schelling points) are not quite the same thing as having bad approximations, since it’s impossible to have a good approximation of another agent of comparable power, even in principle.
I didn’t mean the approximations are bad. I meant that the ‘fundamental’ morality is rule (i.e., UDT) consequentialism, and the only reason we have to use other stuff is that we don’t have unlimited computational power, much like we use aerodynamics to study airplanes because it’s infeasible to use quantum field theory for that.
My point is that once you add UDT to consequentialism it becomes very similar to deontology. For example, Kant’s Categorical Imperative can be thought of as a special case of UDT.
UDT doesn’t need to be added to consequentialism, or the reverse. UDT is already based on consequentialist assumptions and any reasonably advanced way of thinking about consequences will result in a decision theory along those lines.
It is only people’s muddled intuitions about UDT and similar reflexive decision theories that make them seem remotely deontological, particularly to those inclined to use UDT as an “excuse” to cooperate when they just want that to be the right thing to do for other reasons.
From what I can infer, people who think deontologically seem to reason: “The most effective decision as evaluated by UDT is Cooperate in this situation where CDT picks Defect. That feels moral to me. UDT must be on my side. I claim UDT is deontological because we agree on this particular issue.” This leads to people saying “Using UDT/TDT reasoning...” in places where UDT doesn’t reason in any such way.
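For concreteness, here is a minimal sketch of the Cooperate/Defect divergence being gestured at: a one-shot Prisoner’s Dilemma played against an exact copy of yourself. The payoff matrix and the against-a-copy setup are illustrative assumptions, not something asserted in this exchange:

```python
# Prisoner's Dilemma against an exact copy. Payoffs are the standard
# illustrative ones; the against-a-copy setup is an assumption of this sketch.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

def cdt_choice(opponent_action: str) -> str:
    # CDT holds the opponent's action fixed and best-responds causally;
    # D dominates C against any fixed action.
    return max("CD", key=lambda a: PAYOFF[(a, opponent_action)])

def udt_choice() -> str:
    # A UDT-style agent chooses a policy knowing the copy is running the
    # same computation, so both moves are the same logical output.
    return max("CD", key=lambda a: PAYOFF[(a, a)])

print(cdt_choice("C"), cdt_choice("D"))  # D D -- defect whatever the copy does
print(udt_choice())                      # C   -- (C, C) beats (D, D)
```

Nothing in this is a duty or a maxim; the cooperation falls out of ordinary expected-value maximization once the agent models the logical correlation between its choice and its copy’s.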
UDT is “deontological” if and only if the deontological system in question consists of, or is equivalent to, the rule “It is an ethical duty to behave like a consequentialist implementing UDT”. I.e., it just isn’t.
You may want to look at various decision theories, particularly updateless decision theory (UDT) and its variants.
Better yet, Kant’s Categorical Imperative can be thought of as just not UDT at all.
Why?
You tell me. It’s not my confusion.
Rather, what distinction are you drawing between UDT/TDT-like decision theories and Kant’s CI?
I count rule consequentialism as a flavour of consequentialism, not as a flavour of deontology.
I agree, but I’d argue that UDT is more than standard rule consequentialism.
I’d put it as TDT, UDT, etc. being attempts to formalize rule consequentialism rigorously enough to implement in an AI.