I think that boundaries […] is an undeniably important concept, and one that can be used to infer ethical behaviour. But I don’t think a simple “winning” deontology can be derived from it.
I see
I’m currently preparing an article where I describe that from the AI engineering perspective, deontology, virtue ethics, and consequentialism […]
please lmk when you post this. i’ve subscribed to your lw posts too
FWIW, I don’t think the examples given necessarily break «membranes» as a “winning” deontological theory.
A surgeon intruding into the boundaries of a patient is an ethical thing to do.
If the patient has consented, there is no conflict.
(Important note: consent does not always nullify membrane violations. In this case it does, but there are many cases where it doesn’t.)
If an AI automated the entire economy, waited until humanity had completely lost the ability to run civilisation on its own, and then suddenly stopped all maintenance of the automated systems that support human lives, letting humans die out because they can no longer support themselves, that would be “respecting humans’ boundaries”, but it would also be an evil treacherous turn.
I think a way to properly understand this might be: if Alice makes a promise to Bob, she is essentially giving Bob a piece of herself, and that changes how he plans for the future. If she then revokes that promise on terms that were not part of the original agreement, she has stolen something from Bob, and that is a violation of membranes?
If the AI promises to support humans under an agreement and then breaks that agreement, that is theft.
Messing with Hitler’s boundaries (i.e., killing him) in 1940 would be an ethical action from the perspective of most systems that may care about that (individual humans, organisations, countries, communities).
In a case like this I wonder if the theory would also need something like “minimize net boundary violations”, kind of like how some deontologies permit murder in extreme cases.
But then this gets really close to utilitarianism, and that’s gross imo. So I’m not sure. Maybe there’s another way to address this? Or maybe I see what you mean.
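For concreteness, here’s one minimal way to write that rule down (the violation measure $v_i$ and the aggregation by summation are my own notation, not anything from the «membranes» posts): pick the action that minimizes total violations across all affected agents,

$$a^* \in \operatorname*{arg\,min}_{a \in A} \; \sum_i v_i(a),$$

where $A$ is the set of available actions and $v_i(a)$ measures how badly action $a$ violates agent $i$’s membrane. Setting $u_i(a) = -v_i(a)$ turns this into $a^* \in \operatorname*{arg\,max}_{a \in A} \sum_i u_i(a)$, which is exactly the aggregative form of utilitarianism. So the worry above seems structurally right: as soon as violations can be traded off against each other, a utility calculus has been smuggled in.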