I’d like to add that there isn’t really a clear objective boundary between an agent and the environment. It’s a subjective line that we draw in the sand. So we needn’t get hung up on what is objectively true or false when it comes to boundaries—and can instead define them in a way that aligns with human values.
I haven’t written the following on LW before, but I’m interested in finding the setup that minimizes conflict.
(An agent can create conflict for itself when it perceives its sense of self as in some regard too large, and/or as in some regard too small.)
Also, I must say: I really don’t think it’s (straightforwardly) subjective.
Related: “Membranes” is better terminology than “boundaries” alone