Do you see those rules as ends in and of themselves, or as the most effective means to achieving the end of “maximize the weighted total utility of all sentient beings”? Or maybe just as guidelines you use in pursuit of that end?
I think the “rights” idea is the starting point: it is good in itself for a sentient being (an entity which possesses qualia, such as all animals and possibly some non-animal life forms and AIs, depending on how consciousness works) to get what it wants, that is, to have its utility function maximized. If it cannot verbally describe its desires, proxies include pleasure versus pain, the way the organism evolved to live, the kinds of decisions it can generally be observed making, and so on.
The amount of this right a being possesses is proportional to its capacity for conscious experience: the intensity, and perhaps also the variety, of its qualia. So humans would individually score only slightly higher than most other mammals, since our emotions are about as intense but we have more types of qualia by virtue of being the substrate of a memetic ecosystem, and the total amount of rights on the planet belonging to nonhumans vastly outweighs the rights collectively held by humans. (Many people on LessWrong likely disagree with me on that.)
Meanwhile, the amount of responsibility a being has to protect this right is proportional to its capacity to influence the future trajectory of the world times its capacity to understand this concept. That means humans currently hold nearly all the moral responsibility on the planet, though AIs will soon hold a hefty chunk of it and will eventually far outstrip us. (This implies that the ideal thing to do is to uplift all other life forms so they can take care of themselves, find and obtain new sources of subjective value far beyond what they could experience as they are, and in the process relieve ourselves and our AIs of the burden of stewardship.)
The “consequentialism” comes from the basic idea that every entity with such responsibility ought to strive to maximize its total positive impact on that fundamental right in all beings, weighted by their ownership of that right and by its own responsibility to uphold it, for example by supporting AI alignment, abolitionism, and so on. (Put another way, I think we have the responsibility to implement the coherent extrapolated volition of all living things. Our own failure to align to what is, to me, a rather obvious ethical imperative suggests a gloomy prospect for our AI children!)
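To make the weighting concrete, here is a rough sketch in symbols (the notation is mine and purely illustrative, not something the framework commits to): let $C_i$ be being $i$'s capacity for conscious experience, let $I_j$ and $U_j$ be agent $j$'s capacity to influence the world's trajectory and its capacity to understand this concept, and let $\Delta V_i(a)$ be the change in being $i$'s utility under action $a$. Then

$$w_i \propto C_i, \qquad r_j \propto I_j \cdot U_j, \qquad a_j^{*} = \arg\max_a \; r_j \sum_i w_i \, \Delta V_i(a).$$

Since $r_j$ is a positive constant for a given agent, it does not change which action that agent should pick; it only scales how much of the overall moral burden falls on that agent rather than on others.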
That all sounds pretty reasonable :)