This reminds me of Counterfactual Harm https://arxiv.org/pdf/2204.12993.pdf, where the authors define the harm Agent 2 does to Agent 1 in terms of the counterfactual consequences of Agent 2’s actions. However, this also requires defining what the acceptable “default action” is. For example, one wouldn’t expect a mule farmer to save the life of someone having a heart attack, so the mule farmer hasn’t done “harm” if they fail to help; but we would expect a doctor to help, and the doctor has done harm if they haven’t.
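Schematically (this is my paraphrase of the shape of their definition, in my own notation, not the paper’s exact formula): the harm of Agent 2 taking action $a$ rather than a default action $d$ is the expected amount by which the counterfactual outcome under $d$ would have been better for Agent 1,

$$\mathrm{harm}(a \mid d) \;=\; \mathbb{E}\big[\max\{0,\; U(Y_d) - U(Y_a)\}\big],$$

where $Y_a$ and $Y_d$ are the counterfactual outcomes under the action and under the default, and $U$ measures Agent 1’s welfare. Everything downstream hinges on how $d$ is chosen.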
However, they also admit:
we do not provide a method for determining the desired default action or policy in general
I believe that «membranes» (what Critch calls «boundaries») can provide these defaults.
I think it might be possible to determine moral defaults from a simple premise: “it’s unworkable to rely on forcing anyone to do anything that they haven’t agreed to”. In which case, if Alice can’t control Bob, then all she can do is “mind her own business”. She may want to control others, but she can’t.
Put another way: there are things that only Bob can do and that Alice cannot meddle in. (This can then be formalized in terms of Markov blankets.)
For example, I cannot control your actions and I cannot observe your subjective experience, and you cannot do either to me. I call this fact “individual sovereignty”.
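A minimal toy sketch of that separation (the class and dynamics here are mine, purely illustrative, not a claim about anyone’s actual formalism): each agent’s internal state is updated only from its own internals plus what crosses its boundary as observations, and the only channel through which Alice can influence Bob is by acting on the shared environment.

```python
import random

class Agent:
    """Toy agent whose internals sit behind a "membrane": only its own
    observe/act methods ever touch its internal state."""

    def __init__(self, name):
        self.name = name
        self._internal = random.random()  # hidden state; nothing outside writes to it

    def observe(self, environment):
        # Information crosses the boundary only as observations of the shared environment.
        self._internal = 0.9 * self._internal + 0.1 * environment["shared_signal"]

    def act(self, environment):
        # Influence crosses the boundary only as actions on the shared environment.
        environment["shared_signal"] += self._internal - 0.5

# Alice and Bob interact only through the shared environment: Alice never reads
# or writes bob._internal directly (that separation is what a Markov-blanket
# formalization would make precise).
env = {"shared_signal": 0.0}
alice, bob = Agent("Alice"), Agent("Bob")
for _ in range(10):
    for agent in (alice, bob):
        agent.observe(env)
        agent.act(env)
```

Individual sovereignty, in this picture, is just the fact that the `_internal` fields never appear in anyone else’s update rule.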
And I think individual sovereignty is the default: it is where the most fundamental moral “defaults” come from.
Of course, we can make extra agreements with others on top of that, but crucially there is only a finite number of limited-scope social contracts that are ~explicitly added on top of that default.
For example, a patient and a doctor enter into a contract where the doctor agrees to provide care and the patient agrees to pay. This contract is then also backed by a larger force (like the government), which enforces contracts in general, including the one that says the doctor will be sent to jail if he breaks the law.
Social contracts can add on top of the individual sovereignty default, albeit to a limited extent.
Another example: Duty to Rescue laws obligate you not merely to “mind your own business” but to actively try to save people near you who are in trouble. But everyone “agrees” to them by living in that society, so it works.
The above should address the doctor and mule farmer examples.
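A minimal sketch of how this could supply the “default action” the counterfactual-harm definition needs (the contract names and obligations below are mine, purely illustrative): what you owe a stranger by default is just non-interference, and any positive duty has to come from a contract or law that binds you.

```python
# Toy model: obligations = individual-sovereignty baseline + whatever the
# agent's (explicitly or implicitly agreed) contracts add. Names are illustrative.

BASELINE = {"non-interference"}

CONTRACTS = {
    # contract/law -> (role it binds, extra obligations it adds)
    "doctor-patient contract": ("doctor", {"attempt treatment"}),
    "duty-to-rescue law": ("resident", {"attempt rescue of people nearby"}),
}

def default_obligations(roles):
    """The 'default policy' owed by an agent holding these roles."""
    obligations = set(BASELINE)
    for _name, (bound_role, extra) in CONTRACTS.items():
        if bound_role in roles:
            obligations |= extra
    return obligations

print(default_obligations({"mule farmer"}))         # baseline only: no duty to treat
print(default_obligations({"doctor", "resident"}))  # baseline + contractual duties
```

On this picture, the mule farmer’s default toward the heart-attack victim is only non-interference, so failing to help isn’t counterfactual harm, while the doctor’s contract makes attempting treatment part of their default.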
In sum: I think “individual sovereignty (baseline) + finite social contracts (extra, subjective)” is enough to (fully?) determine moral defaults.
Put another way: “Never expect to be able to force anyone to do anything, except when they’ve agreed.”
Of course, it would be nice to live in a world where everyone helps others as much as they can all the time, but that violates the premise and I think it is unworkable. (Though, in the morality literature it doesn’t seem uncommon to assume that you, the ethicist, get to decide what other people do, AFAICT?)