I agree, and not just because it’s us deciding the rubric. I believe an objective sentient bystander would agree that there is some (important) measure by which we come out ahead, meaning our utility deserves greater weight in the equation.
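To put that in rough symbols (assuming a simple additive aggregation, which is only one way to spell it out): if the maximizer computes something like $U_{\text{global}} = \sum_i w_i \, u_i$ over all sentient beings $i$, the claim is that our weight $w_i$ deserves to come out larger than a flat equal weighting would assign it.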
That is, if they are global utility maximizers. Incidentally, where does that assumption come from? It seems kind of strange. Are these utility maximizers just so social and empathetic they want everybody to be happy?
You could imagine the perfect global utility maximizer being created by self-modification of beings, or built by beings who desire such a maximizer.
Why would they want that in the first place? Prosocial emotions (e.g. caused by cooperation and kin selection instincts + altruistic memes) could be a starting point.
Another possible path is philosophical self-reflection. A self-modelling agent could model its utility as resulting from the valuation of mental states, e.g. a hedonist who reflects on what value means to them and concludes that what matters is the (un-)pleasantness of their brain states.
From there, you need only a few philosophical assumptions to generalize (a rough sketch in symbols follows the list):
1) Mental states are time-local; the psychological present lasts perhaps three seconds at most.
2) Our selves are not immutable metaphysical entities, but physical system states that are being transformed considerably (from fetus to toddler to preteen to adult to mentally disabled).
3) Other beings share the crucial system properties (brains with (un-)pleasantness); we even have common ancestors passing on the blueprints.
4) Hypothetically, though improbably, any being could be transformed into any other being by a gradual process using speculative technology (e.g. nanotechnology could transform me into you, or a human into a chimp or a pig, etc.) without interrupting life functions.
5) An agent might decide that it shouldn’t matter how a system state came about, only what properties it has; e.g. it shouldn’t matter to me whether you are a future version of me, transformed by speculative technology starting from my current state, but only what properties your system states have (e.g. (un-)pleasantness).
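Putting 1)–5) together, the generalized target is roughly (a loose formalization of the above, nothing more): $U = \sum_{s} \sum_{t} v(m_{s,t})$, i.e. sum the (un-)pleasantness $v$ of the mental state $m_{s,t}$ of every system $s$ at every psychological moment $t$, with no extra term for whose states they are or for how the system got into them.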
I’m not claiming this is enough to beat everyday psychological egoism, but it could be enough for a philosopher-system to desire self-modification or the creation of an artificial global utility maximizer.