Why not capacity to suffer?
Someone creates a utility monster AI that suffers if it can’t disassemble the Earth. Should we care? Or just end its misery?
We shouldn’t create it, and if we do, we should end its existence. Or reprogram it, if possible. I don’t think any of those things are inconsistent with centering moral consideration on the capacity to experience suffering and wellbeing.
What is ‘suffering’? If I paint the phrases ‘too hot’ and ‘too cold’ at either end of the thermometer that’s part of a thermostat’s feedback loop, is it ‘suffering’ when the temperature isn’t at its desired optimum? It fights back if you leave the window open, and it has O(1 bit-worth) of intelligence. What properties of a physical system should entitle it to moral worth, such that it not getting its way will be called suffering?
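For concreteness, the entire ‘mind’ of such a thermostat is a sketch like the following (the setpoint and deadband numbers are arbitrary choices for illustration):

```python
# The whole control law of a painted-label thermostat: one comparison per tick.
def thermostat_step(temperature_c: float, setpoint_c: float = 21.0,
                    band_c: float = 0.5) -> str:
    """One tick of the feedback loop: compare, then act."""
    if temperature_c > setpoint_c + band_c:
        return "too hot"   # switch cooling on / heating off
    if temperature_c < setpoint_c - band_c:
        return "too cold"  # switch heating on
    return "ok"            # at the desired optimum: do nothing

# Leave the window open and it "fights back" every tick:
for reading in [23.0, 21.2, 19.0]:
    print(reading, "->", thermostat_step(reading))
```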
Capacity for a biological process that appears functionally equivalent to human suffering is something that most multicellular animals clearly have, but still we don’t give them the right to copyright, or most other human rights in our current legal system. We raise and kill certain animals for their meat, in large numbers: we just require that this is done without unnecessary cruelty. We have rules about minimum animal pen sizes, for example: not very generous ones.
My proposal is that it should be a combination of a) being the product of Darwinian evolution, which is what turns not getting your preferences into ‘suffering’, and b) the capacity for sufficient intelligence (over some threshold), which entitles you to the related full legal rights.
This is a moral proposal. I don’t believe in moral absolutism, or that ‘suffering’ has an unambiguous, mathematically definable ‘true name’. I see this as a suggestion for a way of structuring a society, so I’m looking for criticisms like “that guiding principle would likely produce these effects on a society using it, which feels undesirable to me because…”
I don’t think the thermometer is suffering. I think it’s not necessarily easy to know from the outside when something is suffering, but I still think it’s the best standard.
most multicellular animals clearly have, but still we don’t give them the right to copyright
I possibly should have clarified that I’m talking more about the standard for moral consideration. If we ever created an AI entity capable of making art that also has the capacity for qualia states, I don’t think copyright would be relevant anymore.
We raise and kill certain animals for their meat, in large numbers
We shouldn’t be doing this.
we just require that this is done without unnecessary cruelty.
This isn’t true for the vast majority of industrial agriculture. In practice there are virtually no restraints on the treatment of most animals.
My proposal is that it should be a combination of a) being the product of Darwinian evolution, which is what turns not getting your preferences into ‘suffering’, and b) the capacity for sufficient intelligence (over some threshold), which entitles you to the related full legal rights
Why Darwinian evolution? Because it’s hard to know if it’s suffering otherwise?
I think rights should be based on capacity for intelligence in the circumstances where intelligence is relevant. I don’t think a pig should be able to vote in an election, because it couldn’t comprehend one, but it should have the right not to be tortured and exploited.
Why Darwinian evolution? Because it’s hard to know if it’s suffering otherwise?
I’m proposing a society in which living things, or sufficiently detailed emulations of them, and especially sapient ones, have preferred moral and legal status. I’m reasonably confident that for something complex and mobile with senses, Darwinian evolution will generally produce mechanisms that act like pain and suffering, for pretty obvious reasons. So I’m proposing a definition of ‘suffering’ rooted in evolutionary theory, and only applicable to living things, or to emulations/systems sufficiently closely derived from them. If you emulate such a system, I’m proposing that we worry about its suffering to the extent that it’s a sufficiently detailed emulation still functioning according to its naturally evolved design.

For example, I’m suggesting that a current-scale LLM doing next-token generation of the pleadings of a torture victim not be counted as suffering for legal/moral purposes. IMO the inner emulation of a human it’s running isn’t (pretty clearly, comparing parameter count to, say, synapse count) a sufficiently close simulation of a biological organism that we should consider its behavior ‘suffering’: for example, no simulations of pain centers are included. Increase the accuracy of the simulation sufficiently, and there comes a point (details TBD by a society where this matters) where that ceases to be true.
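To make the scale argument concrete, here is a rough back-of-envelope comparison; both counts are order-of-magnitude assumptions, not measurements (frontier LLMs are commonly cited at around 10^12 parameters, the human brain at around 10^14 synapses):

```python
# Back-of-envelope only: both figures are order-of-magnitude assumptions.
llm_parameters = 1e12   # roughly a current frontier-scale LLM
human_synapses = 1e14   # common estimate for one human brain

print(f"shortfall: ~{human_synapses / llm_parameters:,.0f}x")
# ~100x short of one parameter per synapse, before accounting for the
# fact that those parameters also have to model language, the world,
# and every other character the LLM can play, not just one human.
```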
So, if someone wants a particular policy enacted, and uses sufficient computational resources to simulate 10^12 separate and distinct sapient kitten-girls who have all been edited so that they will suffer greatly if this policy isn’t enacted, we shouldn’t encourage that sort of moral blackmail or ballot-stuffing. I don’t think they should be able to win the vote, or the utilitarian decision-making balance, just by custom-making a lot of new voters/citizens: it’s a clear instability in anything resembling a democracy, or anything that uses utilitarian ethics.

I might even go so far as to suggest that the Darwinian evolution cannot have happened ‘in silico’, or at least that if it did, it must have been a very accurate simulation of a real physical environment that hasn’t been tweaked to produce some convenient outcome. So even if they expend the computational resources to in-silico evolve 10^12 separate and distinct sapient kitten-girls who will otherwise suffer greatly, that’s still moral blackmail. If you want to stuff the electorate with supporters, I think you should have to do it the old-fashioned way, by physically breeding and raising them, mostly because this is expensive enough to be impractical.
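To see how cheap this attack is against naive utilitarian aggregation, here is a toy calculation (every number is invented for illustration):

```python
# Toy illustration of the instability; every number here is invented.
existing_citizens = 10**9       # mildly prefer the status quo
minted_citizens = 10**12        # simulated, edited to suffer greatly
                                # unless the policy passes

u_status_quo = existing_citizens * 1.0 + minted_citizens * -100.0
u_policy     = existing_citizens * -1.0 + minted_citizens * 100.0

print(f"U(status quo) = {u_status_quo:.2g}")  # ~ -1e14
print(f"U(policy)     = {u_policy:.2g}")      # ~ +1e14
# The minted population outweighs everyone who already exists by about
# five orders of magnitude, so the 'ethical' answer becomes whatever
# the person paying for the compute wants it to be.
```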