I find it interesting to build simple toy models of the human utility function. In particular, I was thinking about the aggregation of value associated with other people. In utilitarianism this question is known as “population ethics” and is infamously plagued with paradoxes. However, I believe these paradoxes are the result of trying to be impartial. Humans are very partial, and this allows for coherent ways of aggregation. Here is my toy model:
Let Alice be our viewpoint human. Consider all social interactions Alice has, categorized by some types or properties, and assign a numerical weight to each type of interaction. Let $i_t(A,B) > 0$ be the weight of the interaction person $A$ had with person $B$ at time $t$ (if there was no interaction at this time then $i_t(A,B) = 0$). Then, we can define Alice’s affinity to Bob as
$$\mathrm{aff}_t(\text{Alice},\text{Bob}) := \sum_{s=-\infty}^{t} \alpha^{t-s}\, i_s(\text{Alice},\text{Bob})$$
Here $\alpha \in (0,1)$ is some constant. Of course, $\alpha^{t-s}$ can be replaced by many other decaying functions.
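As a minimal sketch of the affinity computation (the interaction log, weights, and function names below are purely illustrative, not part of the model):

```python
def affinity(interactions, a, b, t, alpha=0.9):
    """Exponentially discounted sum of past interaction weights between a and b.

    `interactions` maps (person, person, time) -> weight i_s(a, b) > 0;
    missing entries mean no interaction at that time (weight 0).
    """
    total = 0.0
    # In practice the log is finite, so the sum over s <= t is finite too.
    for (p, q, s), weight in interactions.items():
        if p == a and q == b and s <= t:
            total += (alpha ** (t - s)) * weight
    return total

# Hypothetical interaction log: (A, B, time) -> weight of that interaction.
log = {
    ("Alice", "Bob", 0): 2.0,    # e.g. a long conversation
    ("Alice", "Bob", 3): 1.0,    # e.g. a short message
    ("Alice", "Carol", 2): 1.5,
}

print(affinity(log, "Alice", "Bob", t=5))  # 2.0 * 0.9**5 + 1.0 * 0.9**2
```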
Now, we can define the social distance of Alice to Bob as

$$d_t(\text{Alice},\text{Bob}) := \inf_{\substack{p_1, \dots, p_n \\ p_1 = \text{Alice},\ p_n = \text{Bob}}} \; \sum_{k=1}^{n-1} \mathrm{aff}_t(p_k, p_{k+1})^{-\beta}$$

Here $\beta > 0$ is some constant, and the power law was chosen rather arbitrarily; there are many functions of $\mathrm{aff}$ that would work. Dead people should probably count in the infimum, but their influence wanes over time since they don’t interact with anyone (unless we count consciously thinking about a person as an interaction, which we might).
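The infimum over chains of people is just a shortest-path computation on a graph whose edge lengths are $\mathrm{aff}_t^{-\beta}$. A minimal sketch, assuming a precomputed affinity table (all names and values below are illustrative):

```python
import heapq

def social_distance(aff, source, target, beta=1.0):
    """Shortest-path distance where each edge (p, q) has length aff[p, q] ** -beta.

    `aff` maps pairs (p, q) -> affinity at the current time;
    pairs with zero affinity are simply absent (infinite edge length).
    """
    # Build an adjacency list from the affinity table (symmetric case).
    neighbors = {}
    for (p, q), a in aff.items():
        if a > 0:
            neighbors.setdefault(p, []).append((q, a ** -beta))
            neighbors.setdefault(q, []).append((p, a ** -beta))
    # Dijkstra over the people graph.
    dist = {source: 0.0}
    queue = [(0.0, source)]
    while queue:
        d, p = heapq.heappop(queue)
        if p == target:
            return d
        if d > dist.get(p, float("inf")):
            continue
        for q, length in neighbors.get(p, []):
            nd = d + length
            if nd < dist.get(q, float("inf")):
                dist[q] = nd
                heapq.heappush(queue, (nd, q))
    return float("inf")  # no chain of interactions connects the two

aff = {("Alice", "Bob"): 4.0, ("Bob", "Carol"): 1.0}
print(social_distance(aff, "Alice", "Carol", beta=1.0))  # 4**-1 + 1**-1 = 1.25
```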
This is a time-dependent metric (or quasimetric, if we allow for asymmetric interactions such as thinking about someone or admiring someone from afar) on the set of people. If $i$ is bounded and there is a bounded number of people Alice can interact with at any given time, then there is some $C > 1$ such that the number of people within distance $r$ from Alice is $O(C^r)$. We now define the reward as
$$r_t(\text{Alice}) := \sum_{p} \lambda^{d_t(\text{Alice},p)}\, w_t(p)$$
Here $\lambda \in (0, 1/C)$ is some constant and $w_t(p)$ is the “welfare” of person $p$ at time $t$, or whatever is the source of value of people for Alice. Finally, the utility function is a time-discounted sum of rewards, probably not geometric (because hyperbolic discounting is a thing). It is also appealing to make the decision rule minimax-regret over all sufficiently long time discount parameters, but this is tangential.
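Putting the pieces together, the reward is a sum over people whose welfare is discounted geometrically in social distance. A rough sketch reusing the hypothetical `social_distance` helper above (welfare values are made up):

```python
def reward(aff, welfare, viewpoint="Alice", lam=0.5, beta=1.0):
    """Sum of welfare over people, weighted by lambda ** social_distance.

    The condition lambda < 1/C (with O(C**r) growth of the number of people
    within distance r) is what keeps this sum finite even for an unbounded
    population.
    """
    total = 0.0
    for person, w in welfare.items():
        d = social_distance(aff, viewpoint, person, beta=beta)
        total += (lam ** d) * w
    return total

aff = {("Alice", "Bob"): 4.0, ("Bob", "Carol"): 1.0}
welfare = {"Alice": 1.0, "Bob": 0.5, "Carol": 0.8}  # illustrative welfare values
print(reward(aff, welfare))  # Alice at distance 0, Bob at 0.25, Carol at 1.25
```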
Notice how the utility function is automatically finite and bounded, and none of the weird paradoxes of population ethics and infinitary ethics crop up, even if there is an infinite number of people in the universe. I like to visualize the space of people as a tiling of hyperbolic space, with Alice standing at the center of a Poincaré or Beltrami-Klein model of it. Alice’s “measure of caring” is then proportional to volume in the model (this probably doesn’t correspond to exactly the same formula, but it’s qualitatively right, and the formula is only qualitative anyway).