I’m not following your final point. Regardless of determinism, the “state space” I can explore as an embedded agent is constrained by the properties of the local environment. If I value things like a walkable neighborhood, but I’m stuck in a pile of rubble, that’s going to constrain my available state space and accordingly it’s going to constrain my ability to have any rewarding outcome. McTraffic, by not allotting freedoms to me when executing its transportation redesign, impinged on my freedom (which was mostly afforded to me through my and my neighbors’ property rights).
Freedoms (properly encoded), I believe, are the proper framing for creating utility functions/value-systems for critters like our friendly neighborhood traffic agent. Sure, the traffic agent values transportation efficiency, but since it also values other agents’ freedom to property rights, it will limit the execution of its traffic-efficiency preferences within a multi-agent shared environment to minimize the restriction of property rights. To me, this seems simpler, and less error-prone, than any approach that tries to infer my values (or human preferences more generally) and act according to that inference.
Freedoms assume awareness of external (embedded) agency; they are values you afford to other agents. They have a payoff because you are then afforded them back. This helps ensure agents do not unilaterally bulldoze (literally or figuratively) the “available state space” for other agents to explore and exploit.
If I value things like a walkable neighborhood, but I’m stuck in a pile of rubble, that’s going to constrain my available state space and accordingly it’s going to constrain my ability to have any rewarding outcome.
And if you value rubble, having it replaced by a walkable neighborhood would constrain your available state space. It’s symmetrical.
I worry you are just “seeing the freedom” inherent in the neighborhood more easily because you like freedom and you also like walkable neighborhoods. But this leads you to pick examples where the two different kinds of liking are all mixed up.
Sure, the traffic agent values transportation efficiency, but since it also values other agents’ freedom to property rights, it will limit the execution of its traffic-efficiency preferences within a multi-agent shared environment to minimize the restriction of property rights. To me, this seems simpler, and less error-prone, than any approach that tries to infer my values (or human preferences more generally) and act according to that inference.
But this precisely is an approach that tries to infer your values! It has to model you a certain way (as having certain freedoms like turning right vs. left, but not other freedoms like being able to fly). And I like the vision overall, but I think if you make it too strict you’ll end up with an AI that’s not making choices to defend a certain state of affairs, and so it’s going to do silly things and get outcompeted by other forces.
Ah ok, I think I’m following you. To me, freedom describes a kind of bubble around a certain physical or abstract dimension, whose center is at another agent. Its main use is to limit computational complexity when sharing an environment with other agents. If I have a set of freedom values, I don’t have to infer the values of the agent so long as I don’t enter their freedom bubbles. In the traffic example, how the neighborhood is constructed should be irrelevant to McTraffic; all it needs to know is a) there are other agents present in the neighborhood already, and b) it wants to change the nature of the neighborhood, which will enter the other agents’ freedom bubbles. Therefore it needs to negotiate with the inhabitants (so yes, at this step there’s an inference via dialogue going on).
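To make the computational-complexity point concrete, here’s a minimal sketch of the bubble check I have in mind. Everything here (`FreedomBubble`, `plan_action`, the 2-D geometry) is a hypothetical illustration, not an established API: the point is only that the planner never models the residents’ values, it just tests whether a proposed change intersects a protected region and falls back to negotiation if it does.

```python
from dataclasses import dataclass

@dataclass
class FreedomBubble:
    # Center of the bubble sits at another agent; here the "dimension"
    # is just 2-D physical space for illustration.
    center: tuple[float, float]
    radius: float

    def contains(self, point: tuple[float, float]) -> bool:
        dx = point[0] - self.center[0]
        dy = point[1] - self.center[1]
        return dx * dx + dy * dy <= self.radius * self.radius

def plan_action(target: tuple[float, float],
                bubbles: list[FreedomBubble]) -> str:
    # No value inference: only a geometric membership test.
    if any(b.contains(target) for b in bubbles):
        return "negotiate"  # entering a bubble requires dialogue/consent
    return "proceed"        # outside all bubbles: act unilaterally

residents = [FreedomBubble(center=(0.0, 0.0), radius=5.0)]
print(plan_action((1.0, 1.0), residents))    # prints "negotiate"
print(plan_action((10.0, 10.0), residents))  # prints "proceed"
```

The cost of the check scales with the number of bubbles, not with the complexity of each resident’s preferences, which is the whole appeal over value inference.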