I’m probably something like a rule consequentialist (which feels like a mixture of consequentialism and deontology), in that I want to maximize the weighted total utility of all sentient beings, but I want to do so while obeying strict moral rules in most cases (the ends do not automatically justify the means in every case).
Specifically, I think the foundational rules are “don’t affect another sentient being in a way they didn’t give you permission to” and “don’t break your promises”, with caveats (which I am not sure how to specify rigorously) for situations where it is necessary and reasonable to break those rules. Nearly every other moral principle falls out of those two. Really they’re the same rule stated in two different ways: “take only actions which everyone affected deems acceptable”, given that your past self is affected by your present self’s actions and can thus influence which ones are acceptable by making promises.
Then my consequentialism could be restated as “maximize the degree to which this moral principle is followed in the universe.”
Do you see those rules as ends in and of themselves, or do you see them as the most effective means to achieving the end of “maximize the weighted total utility of all sentient beings”? Or maybe just guidelines you use in order to achieve that end?
I think that the “rights” idea is the starting point: it is good in itself for a sentient being (an entity which possesses qualia, such as all animals and possibly some non-animal life forms and AIs, depending on how consciousness works) to get what it wants, i.e. to have its utility function maximized. If it cannot verbally describe its desires, proxies for this include pleasure versus pain, the way the organism evolved to live, the kinds of decisions the organism can be observed generally making, etc.
The amount of this right a being possesses is proportional to its capacity for conscious experience, i.e. the intensity and perhaps also the variety of its qualia. So humans would individually score only slightly higher than most other mammals, since we have equally intense emotions but more types of qualia (owing to our being the basis of a memetic ecosystem), and the total amount of rights on the planet belonging to nonhumans vastly outweighs the amount collectively owned by humans. (Many people on LessWrong likely disagree with me on that.)
Meanwhile, the amount of responsibility to protect this right that a being possesses is proportional to its capacity to influence the future trajectory of the world times its capacity to understand this concept, meaning humans currently hold nearly all the moral responsibility on the planet, though AIs will soon have a hefty chunk of it and will eventually far outstrip us in responsibility. (This implies that the ideal thing to do is to uplift all other life forms so they can take care of themselves, find and obtain new sources of subjective value far beyond what they could experience as they are, and in the process relieve ourselves and our AIs of the burden of stewardship.)
The “consequentialism” comes from the basic idea that every entity with such responsibility ought to strive to maximize its total positive impact on that fundamental right in all beings, weighted by their ownership of that right and by its own responsibility to uphold it, for example by supporting AI alignment, abolitionism, etc. (Put another way: I think we have the responsibility to implement the coherent extrapolated volition of all living things. Our own failure to align to that ethical imperative, which seems rather obvious to me, suggests a gloomy prospect for our AI children!)
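To make that weighting a bit more concrete, here is a rough sketch in symbols; the notation is just my own shorthand for this comment, not anything rigorous:

\[
R_i \propto C_i, \qquad P_j \propto I_j \cdot K_j, \qquad \mathrm{score}_j(a) = P_j \sum_i R_i \, \Delta U_i(a)
\]

where \(C_i\) is being \(i\)’s capacity for conscious experience, \(R_i\) its share of the fundamental right, \(I_j\) is agent \(j\)’s capacity to influence the world’s trajectory, \(K_j\) its capacity to understand this principle, \(P_j\) its share of moral responsibility, and \(\Delta U_i(a)\) the change in being \(i\)’s utility caused by action \(a\). Each responsible agent then tries to take the actions with the highest score, subject (in most cases) to the consent and promise-keeping rules above.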
That all sounds pretty reasonable :)