FWIW, I think a morality based on minimizing «membrane/boundary»[1] violations could possibly avoid the issues outlined here. That is, a form of deontology where the rule is ~”respect the «membranes/boundaries» of sovereign agents”. (And I think this works because I think «membranes/boundaries» are universally observable.)
Relevant posts:
«Boundaries» for formalizing a bare-bones morality
«Boundaries/Membranes» and AI safety compilation
(see my other posts, too)
(e.g. my really hot «membranes/boundaries» answer to the fat man trolley problem)
I’m excited about «membranes/boundaries» because, in one swoop, it captures everything (or almost everything) that intuitively seems bad, but is otherwise hard to describe, about the examples here: https://arbital.com/p/low_impact/
For example, from ~”respect the «membranes/boundaries» of sovereign agents”, you naturally derive (a toy sketch follows this list):
Don’t kill people
Don’t control people / violate sovereignty
Don’t interfere in other people’s problems without permission
Don’t coddle people
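Here is the toy sketch mentioned above. It is purely my own illustration, not anything from the «Boundaries» Sequence, and the names Action, BoundaryCrossing, and permissible are made up for this example: model each candidate action by the boundary crossings it is predicted to cause, and call it permissible only if every crossing is consented to.

```python
# Toy sketch (hypothetical, illustrative only): "respect the «membranes/boundaries»
# of sovereign agents" treated as a deontological filter over candidate actions.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class BoundaryCrossing:
    """A predicted effect of an action that crosses some agent's membrane/boundary."""
    agent: str       # whose boundary is crossed
    consented: bool  # did that agent permit the crossing?

@dataclass
class Action:
    name: str
    crossings: list = field(default_factory=list)  # predicted BoundaryCrossing objects

def permissible(action: Action) -> bool:
    """Permissible iff every predicted boundary crossing is consensual."""
    return all(c.consented for c in action.crossings)

if __name__ == "__main__":
    candidates = [
        Action("push the fat man", [BoundaryCrossing("fat man", consented=False)]),
        Action("perform surgery with consent", [BoundaryCrossing("patient", consented=True)]),
        Action("'fix' someone's problem uninvited", [BoundaryCrossing("that person", consented=False)]),
        Action("do nothing", []),
    ]
    for a in candidates:
        print(f"{a.name}: {'permissible' if permissible(a) else 'impermissible'}")
```

On this reading, killing, controlling, uninvited interventions, and coddling all fail the same single test (a non-consensual crossing), while inaction trivially passes; the hard work is, of course, in specifying what counts as a crossing and as consent.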
I’m working on this myself right now, though not entirely in a moral philosophy direction. If anyone wants to take this on (e.g., in a formal moral philosophy direction), I would be eager to help you!
I will continue to publish posts about this topic on my LW account; subscribe to my posts to get notified.
You can see the «Boundaries» Sequence for a longer explanation, but I will excerpt from a more recent post by Andrew Critch (March 2023):
By boundaries, I just mean the approximate causal separation of regions in some kind of physical space (e.g., spacetime) or abstract space (e.g., cyberspace). Here are some examples from my «Boundaries» Sequence:
a cell membrane (separates the inside of a cell from the outside);
a person’s skin (separates the inside of their body from the outside);
a fence around a family’s yard (separates the family’s place of living-together from neighbors and others);
a digital firewall around a local area network (separates the LAN and its users from the rest of the internet);
a sustained disassociation of social groups (separates the two groups from each other);
a national border (separates a state from neighboring states or international waters).
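To gesture at what “approximate causal separation” could mean formally (this gloss is my own, in the spirit of Markov-blanket-style formalizations; it is not quoted from Critch’s post): write the world state at time $t$ as an inside $I_t$, a boundary $B_t$, and an outside $O_t$. The boundary causally separates inside from outside to the extent that, given the boundary, the inside and outside evolve independently:

$$P(I_{t+1}, O_{t+1} \mid I_t, B_t, O_t) \;\approx\; P(I_{t+1} \mid I_t, B_t)\, P(O_{t+1} \mid O_t, B_t).$$

A “boundary violation” is then, roughly, an outside influence that breaks this factorization without the inside agent’s consent.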
[Also, a tag exists for this: «membranes/boundaries».]
Glad to have this flagged here, thanks. As I’ve said to @Chipmonk privately, I think this sort of boundaries-based deontology shares lots of DNA with the libertarian deontology tradition, which I gestured at in the last footnote. (See https://plato.stanford.edu/entries/ethics-deontological/#PatCenDeoThe for an overview.) Philosophers have been discussing this stuff at least since Nozick in the 1970s, so there’s lots of sophisticated material to draw on—I’d encourage boundaries/membranes fans to look at this literature before trying to reinvent everything from scratch.
The SEP article on republicanism also has some nice discussion of conceptual questions about non-interference and non-domination (https://plato.stanford.edu/entries/republicanism), which I think any approach along these lines will have to grapple with.
@Andrew_Critch and @davidad, I’d be interested in hearing more about your respective boundaritarian versions of deontology, especially with respect to AI safety applications!
For what it’s worth, the phrase “night watchman” as I use it is certainly downstream of Nozick’s concept.