Hey, thanks for the link, Richard. That was an interesting read. There definitely seem to be some similarities.
I was actually thinking the other day about what we want to tile the future lightcone with. This was the progression I saw:
Conventional Morality :: Do what feels right without thinking much about it.
Utilitarianism I :: The atomic unit of “goodness” and “badness” is the valence of human experience. The valence of experience across all humans matters equally. The suffering of a child in Africa matters just as much as the suffering of my neighbor.
Utilitarianism II :: The valence of experience across all sentient things matters equally. e.g. The suffering of cows matters too.
Utilitarianism III :: The valence of experience across all sentient things across time matters equally. The suffering of sentient things in the future matters just as much as the suffering of my neighbor today. i.e. longtermism
Utilitarianism IV :: Understanding valence and consciousness takes lexicographic priority over any attempt to improve the valence of sentient things as we understand it today, because only with this better understanding can we efficiently maximize the valence of sentient things. i.e. veganism is only helpful insofar as it speeds up our understanding of consciousness and the release of a utilitron shockwave. Everything before the utilitron shockwave can be rounded to zero.
Utilitarianism V :: Upon understanding consciousness, we can expect to have our preferences significantly shaken in ways we can’t hope to properly anticipate (we can’t expect to have properly understood our preferences with such a weak understanding of “reality”). The lexicographic priority then becomes understanding consciousness and making the “right” decision on what to do next upon understanding it. In this case, all of our “moral” actions were only good insofar as they contributed to this revelation and to making the “right” decision once consciousness is understood.
Utilitarianism VI :: ?
Utilitarianism V has some similarities to tiling the future lightcone with copies of yourself, which can then act on their updated preferences in the future.
But “yourself” is really just a collection of memes. It would be the memes propagating themselves, like a virus. There’s no coherent, persistent definition of “yourself”.
What do you want to tile the future lightcone with?