Maximizing life universally
Pain and pleasure as moral standards do not appeal to me. They are easily manipulated by drugs, and can lead to results such as killing sick people against their will.
To me, life and death are much more interesting. There are issues in defining which lives are to be saved, what it means for a life to be “maximized”, what life actually is, and so on. I propose trying to remain as objective as possible, and defining life through physics and information theory (think negentropy, Schrödinger’s “What is Life” and related works). I am not skilled in any of these sciences, so my chances of being less wrong about the details are slim. But what I envision is something like “Maximize (universally) the amount of computation that energy causes before dissolving into high entropy”, or “Maximize (universally) the number of orderly/non-chaotic events”. Probably severely wrong, but I hope you get the idea.
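To make the first formulation slightly more concrete, here is a rough back-of-the-envelope sketch of my own (assuming the Landauer limit is even the right bound to appeal to): at temperature T, irreversibly erasing one bit costs at least k_B·T·ln 2 joules, so a fixed energy budget puts an upper bound on how much irreversible computation it can cause before dissipating into heat.

```python
import math

# Rough sketch of "how much computation can a given amount of energy cause",
# assuming the Landauer limit applies. This is an illustration of the idea,
# not a claim that it is the right way to formalize the goal.
BOLTZMANN = 1.380649e-23  # Boltzmann constant, J/K


def max_bit_erasures(energy_joules: float, temperature_kelvin: float = 300.0) -> float:
    """Upper bound on irreversible bit operations a given energy budget can pay for."""
    landauer_cost = BOLTZMANN * temperature_kelvin * math.log(2)  # joules per bit erased
    return energy_joules / landauer_cost


# Example: one joule at room temperature bounds roughly 3.5e20 bit erasures.
print(f"{max_bit_erasures(1.0):.2e} bit erasures per joule at 300 K")
```

On this kind of accounting, colder environments and more reversible hardware let the same energy “cause more computation”, which is one possible reading of the goal above.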
I suppose that some rules/actions that may contribute to this goal (not considering all consequences) are:
- Minimizing killing.
- Having lots of children.
- Sustainable agriculture.
- Utilizing solar energy in deserts.
- Building computers.
- Production rather than consumption.
- Colonizing space.
and, ultimately, creating superintelligence, even if it means the end of humanity.
This, to me, is the ultimate altruistic utilitarianism. I don’t think I’m a utilitarian, though… But I wonder if some of you clever people have insights to contribute that could help me become less wrong?
(Placing the following in parentheses is an invitation to discuss this part within the parentheses, so to speak, while the main discussion happens outside them.
There are other ideas that appeal more to me personally:
- Some kind of justice utilitarianism. That is, justice is defined in terms of people’s (or other decent entities’) self-interest (survival, pleasure and pain, health, wealth, etc.) and the action relations between people (such as “He hit me first”). The universal goal is then to maximize justice (reward and punish) and to minimize injustice (protect the innocent).
- Rational egoism based on maximizing learning.
- Less attention paid to particular principles and more to everyday responsibilities, subjective conscience, and natural and social conformity.
Last but not least: focus on staying human and protecting humanity. Maybe extend it both upwards (think AGI) and downwards (to some other species and to human embryos), while protecting the weakness necessary for securing truthful social relations. Protecting weakness means suppressing potential power, most importantly unfriendly AI.
)