Strongly agree. I see many, many others use “intelligence” as their source of value for life—i.e. humans are sentient creatures and therefore worth something—without seriously considering the consequences and edge cases of that position. Perhaps this view was popularized by science fiction that used interspecies xenophobia as an allegory for racism; nonetheless, it’s a somewhat extreme position to stick to if you genuinely believe in it. I held a similar opinion a couple of years ago, but shifted to a human-focused terminal value some months back because I did not like the conclusions it generated when taken to its logical end in present and future society.
Yes, valuing intelligence alone is already problematic when applied to humans—should mentally disabled people have fewer rights? If a genius kills a person of average intelligence, should they get away scot-free? That obviously makes no sense. There are extreme cases, e.g. babies born without a brain at all, who are essentially considered clinically dead, but those lack far more than just intelligence.
And yes, even with aliens, intelligence or any other purely cognitive ability isn’t enough. Even in fiction, the Daleks are intelligent, the Borg are intelligent, but coexistence with them is fundamentally impossible. The things that make us able to get along are subtler than that.
That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment. What use is the right to education to someone who can’t ever learn to read or write no matter how hard you try to coach them? Or freedom of speech to someone who lacks complex cognition entirely?
Personally, I have no compunctions about tying a large portion of someone’s moral worth to their intelligence, if not all of it. Certainly not to the extent I’d prefer a superintelligent alien over a fellow baseline human, unless by some miracle the former almost perfectly aligns with my goals and ideals.
That is certainly both de facto and de jure true in most jurisdictions, leaving aside the is-ought question for a moment.
I mean, fair, but not the full set of human rights—I was thinking more that they still aren’t treated as animals with no right to life. Mentally disabled people are more in the legal position of permanent children: they have rights, but are also considered unable to fully exercise them, and are thus placed under some guardian’s responsibility.
Why not capacity to suffer?
Someone creates a utility monster AI that suffers if it can’t disassemble the Earth. Should we care? Or just end its misery?
We shouldn’t create it, and if we do, we should end its existence. Or reprogram it if possible. I don’t think any of those things are inconsistent with centering moral consideration on the capacity to experience suffering and wellbeing.
What is ‘suffering’? If I paint the phrases ‘too hot’ and ‘too cold’ at either end of the thermometer that’s part of a thermostat’s feedback loop, is it ‘suffering’ when the temperature isn’t at its desired optimum? It fights back if you leave the window open, and has O(1 bit) worth of intelligence. What properties of a physical system should entitle it to moral worth, such that it not getting its way will be called suffering?
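To make the toy system concrete, here is a minimal sketch (purely illustrative; the labels and setpoint are hypothetical):

```python
# Illustrative sketch only: a thermostat's entire "preference" is one comparison.
def thermostat_step(current_temp: float, setpoint: float = 21.0) -> str:
    """Return the heater command; the 'too hot'/'too cold' labels are just paint."""
    if current_temp < setpoint:
        return "heater_on"   # the painted label would read 'too cold'
    return "heater_off"      # the painted label would read 'too hot'

# The system "fights back" against an open window simply by re-running this
# single comparison each tick, which is the O(1 bit) of intelligence in question.
```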
Capacity for a biological process that appears functionally equivalent to human suffering is something that most multicellular animals clearly have, but still we don’t give them the right to copyright, or most other human rights in our current legal system. We raise and kill certain animals for their meat, in large numbers: we just require that this is done without unnecessary cruelty. We have rules about minimum animal pen sizes, for example: not very generous ones.
My proposal is that it should be a combination of a) being the product of Darwinian evolution, which is what turns not getting your preferences satisfied into ‘suffering’, and b) having sufficient intelligence (over some threshold), which is what entitles you to the related full legal rights.
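Spelled out as a toy decision rule (purely illustrative; the field names and threshold are placeholders, not part of the proposal itself), that looks roughly like:

```python
# A toy sketch of the two-part standard being proposed; all names and
# thresholds here are hypothetical illustrations.
from dataclasses import dataclass

@dataclass
class Entity:
    darwinian_origin: bool   # (a) product of Darwinian evolution (or a close emulation)
    intelligence: float      # (b) some measure of cognitive capacity

INTELLIGENCE_THRESHOLD = 1.0  # placeholder for "over some threshold"

def moral_status(e: Entity) -> str:
    if e.darwinian_origin and e.intelligence >= INTELLIGENCE_THRESHOLD:
        return "full legal rights"          # (a) and (b)
    if e.darwinian_origin:
        return "suffering counts morally"   # (a) alone
    return "no suffering-based claim"       # neither: e.g. the thermostat
```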
This is a moral proposal. I don’t believe in moral absolutism, or that ‘suffering’ has an unambiguous mathematically definable ‘true name’. I see this as a suggestion for a way of structuring a society, so I’m looking for criticisms like “that guiding principle would likely produce these effects on a society using it, which feels undesirable to me because…”
I don’t think the thermometer is suffering.
I think it’s not necessarily easy to know from the outside when something is suffering, but I still think it’s the best standard.
most multicellular animals clearly have, but still we don’t give them the right to copyright
I possibly should have clarified that I’m talking more about the standard for moral consideration. If we ever created an AI entity capable of making art that also has the capacity for qualia states, I don’t think copyright will be relevant anymore.
We raise and kill certain animals for their meat, in large numbers
We shouldn’t be doing this.
we just require that this is done without unnecessary cruelty.
This isn’t true for the vast majority of industrial agriculture. In practice there are virtually no restraints on the treatment of most animals.
My proposal is that it should be a combination of a) being the product of Darwinian evolution, which is what turns not getting your preferences satisfied into ‘suffering’, and b) having sufficient intelligence (over some threshold), which is what entitles you to the related full legal rights.
Why Darwinian evolution? Because it’s hard to know if it’s suffering otherwise?
I think rights should be based on capacity for intelligence in certain circumstances where it’s relevant. I don’t think a pig should be able to vote in an election, because it wouldn’t be able to comprehend that, but it should have the right not to be tortured and exploited.
Why Darwinian evolution? Because it’s hard to know if it’s suffering otherwise?
I’m proposing a society in which living things, or sufficiently detailed emulations of them, and especially sapient ones, have preferred moral and legal status. I’m reasonably confident that for something complex and mobile with senses, Darwinian evolution will generally produce mechanisms that act like pain and suffering, for pretty obvious reasons. So I’m proposing a definition of ‘suffering’ rooted in evolutionary theory, and only applicable to living things, or to emulations/systems sufficiently closely derived from them. If you emulate such a system, I’m proposing that we worry about its suffering to the extent that it’s a sufficiently detailed emulation still functioning in its naturally-evolved design. For example, I’m suggesting that a current-scale LLM doing next-token generation of the pleadings of a torture victim not be counted as suffering for legal/moral purposes: IMO the inner emulation of a human it’s running isn’t (pretty clearly, comparing parameter count to, say, synapse count) a sufficiently close simulation of a biological organism for us to consider its behavior ‘suffering’; for example, no simulation of pain centers is included. Increase the accuracy of the simulation sufficiently, and there comes a point (details TBD by a society where this matters) where that ceases to be true.
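For rough scale (approximate figures I’m adding purely for illustration, not exact measurements):

$$\frac{\text{synapses in a human brain}}{\text{parameters in a current frontier LLM}} \approx \frac{10^{14}\text{–}10^{15}}{10^{11}\text{–}10^{12}} \approx 10^{2}\text{–}10^{4},$$

so even on this crudest of counts the emulation is a few orders of magnitude short, before we even ask whether a parameter is comparable to a synapse.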
So, if someone wants a particular policy enacted, and uses sufficient computational resources to simulate 10^12 separate and distinct sapient kitten-girls who have all been edited so that they will suffer greatly if this policy isn’t enacted, we shouldn’t encourage that sort of moral blackmail or ballot-stuffing. I don’t think they should be able to win the vote, or the utilitarian decision-making balance, just by custom-making a lot of new voters/citizens: it’s a clear instability in anything resembling a democracy, or in anything that uses utilitarian ethics. I might even go so far as to suggest that the Darwinian evolution in question cannot have happened ‘in silico’, or at least that if it did, it must be a very accurate simulation of a real physical environment that hasn’t been tweaked to produce some convenient outcome. So even if they expend the computational resources to evolve in silico 10^12 separate and distinct sapient kitten-girls who will otherwise suffer greatly, that’s still moral blackmail. If you want to stuff the electorate with supporters, I think you should have to do it the old-fashioned way, by physically breeding and raising them — mostly because this is expensive enough to be impractical.
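To put rough numbers on why this is an instability (taking the present human population to be of order 10^10, a figure I’m supplying for illustration):

$$\frac{10^{12}\ \text{custom-made simulated citizens}}{10^{10}\ \text{existing humans}} \approx 100,$$

so the fabricated bloc outweighs everyone currently alive by about two orders of magnitude in any headcount vote or naive utility sum, regardless of the policy’s merits.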