For the purposes of morality or decision making, environments that border membranes are a better building block for scopes of caring than whole (possible) worlds, which traditionally fill this role. So it’s not quite a particular bare-bones morality, but more of a shared scaffolding that different agents can paint their moralities onto. Agreement on boundaries is a step towards cooperation in terms of scopes of caring delimited by those boundaries. Different environments then get optimized according to different preferences, by the coalitions of agents that own them.
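A minimal sketch of that scaffolding, assuming illustrative types and data (the `Agent`/`Environment` classes, coalitions, and preferences below are all made up for this example, not part of the original frame):

```python
# A minimal sketch of the shared scaffolding; all types, names, and
# sample data here are illustrative assumptions, not a worked-out model.
from dataclasses import dataclass

@dataclass
class Agent:
    name: str

@dataclass
class Environment:
    label: str
    owners: list[Agent]  # the coalition that owns and optimizes it
    preference: str      # stand-in for that coalition's values

# Agents first agree on where the boundaries lie; each coalition then
# paints its own morality onto the environments on its side of them.
alice, bob = Agent("alice"), Agent("bob")
scaffolding = [
    Environment("garden", owners=[alice], preference="maximize flowers"),
    Environment("workshop", owners=[alice, bob], preference="build tools"),
]
```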
The preposterous thing this frame seemingly requires is not caring about some environments, or caring about different environments in different ways. Traditionally this is expressed in terms of possible worlds: instead of different environments you work with different (sets of) possible worlds, and possible worlds can carry different credences and utilities. But different environments more conspicuously coexist and interact with each other; they don’t look like mutually exclusive alternatives. Newtonian ethics becomes a valid possibility, putting value on things depending on where they occur, not just on what they are.
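A toy contrast between the two pictures, assuming made-up credences and utilities, and a hypothetical distance-decay weighting to stand in for the Newtonian-ethics case:

```python
# Toy contrast; every number and weighting below is an illustrative
# assumption, not anything from the original comment.

# Possible worlds: mutually exclusive alternatives, so value is an
# expectation: each world contributes in proportion to its credence.
worlds = [
    {"credence": 0.6, "utility": 10.0},
    {"credence": 0.4, "utility": -2.0},
]
expected_utility = sum(w["credence"] * w["utility"] for w in worlds)  # 5.2

# Environments: they all coexist, so value is a weighted sum rather
# than an expectation. A Newtonian-ethics weighting makes the weight
# depend on where an environment is, not only on what it contains.
environments = [
    {"distance": 1.0, "utility": 10.0},
    {"distance": 5.0, "utility": -2.0},
]

def care_weight(distance: float) -> float:
    return 1.0 / (1.0 + distance)  # hypothetical decay with distance

total_value = sum(care_weight(e["distance"]) * e["utility"]
                  for e in environments)  # 5.0 - 0.33... ≈ 4.67
```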
This becomes less preposterous in the context of updateless decision making, where possible worlds similarly coexist and interact, and we need to deal with that anyway, so regressing to coexisting and interacting environments costs the frame less of its utility. Unfortunately, you also get logical dependencies that muck up the abstraction of separation between environments. I don’t know what to do about this, apart from declaring the things that introduce logical dependencies to themselves be membranes, but my preliminary impression is that membranes and environments might be more useful than agents and possible worlds for sorting this out.
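A toy illustration of how a logical dependency leaks across the separation, under an assumed model where two environments’ values both route through one shared computation:

```python
# Toy model of a logical dependency; the structure is an assumption
# made up for illustration, not a formal proposal.

def shared_program() -> int:
    """One fixed computation whose output both environments depend on."""
    return sum(range(10)) % 3  # 45 % 3 == 0

def value_of_env_a() -> float:
    # Environment A's value depends on the shared logical fact...
    return 10.0 if shared_program() == 0 else 2.0

def value_of_env_b() -> float:
    # ...and so does environment B's, so "optimizing them separately"
    # still routes through the same fact: the separation abstraction leaks.
    return 1.0 if shared_program() == 0 else 7.0

# The "declare it a membrane" move: factor the shared computation out
# as its own bounded object that both environments border, rather than
# something living inside either of them.
SHARED_FACT = shared_program()
```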
To be clear, I think the bare-bones morality in that post comes from “observe boundaries and then try not to violate them” (or, in Davidad’s case, “…and then proactively defend them”, which is stronger).
I’ll need to think about the rest of your comment more, hm. If you think of examples, please lmk :) Also, wdym that a logical dependency could itself be a membrane? E.g.?
One thing: I think the «membranes/boundaries» generator would probably reject the “you get to choose worlds” premise, for that choice is not within your «membrane/boundary». Instead, there’s a more decentralized (albeit much smaller) thing within your «membrane» that you do have control over. (Similar example here wrt Alex, global poverty, and his donations.)