“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of best proposed solutions, and there are no league tables for each entry listing the worst problems with them. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad: if they weren’t equally bad, they wouldn’t be equal amounts in the first place.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.” → “‘All sentiences are equally important’ is definitely a moral statement.”
Again, you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside ‘you did the logic incorrectly,’ and I’m not sure that your method of testing moral theories takes that possibility into account.”
Once we have AGI, it will be possible to have it run multiple models of morality side by side, show up the differences between them, and prove that it is doing the logic correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
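As a rough illustration only, here is a minimal sketch of what “running multiple models of morality and showing up the differences between them” could look like; the model names, the scoring functions, and the scenario encoding below are hypothetical placeholders I am introducing for the example, not anything proposed in the discussion above:

```python
# Hypothetical sketch: evaluate several candidate moral models on the same
# scenario and report where their verdicts diverge. The models and the
# (individual, harm, benefit) encoding are illustrative, not a real calculus.
from typing import Callable, Dict, List, Tuple

Outcome = Tuple[str, float, float]          # (individual, harm, benefit)
Scenario = Dict[str, List[Outcome]]         # option name -> outcomes

def total_utilitarian(outcomes: List[Outcome]) -> float:
    # Sum benefit minus harm over everyone, weighted equally.
    return sum(benefit - harm for _, harm, benefit in outcomes)

def negative_utilitarian(outcomes: List[Outcome]) -> float:
    # Count only suffering; the best option minimises total harm.
    return -sum(harm for _, harm, _ in outcomes)

MODELS: Dict[str, Callable[[List[Outcome]], float]] = {
    "total utilitarian": total_utilitarian,
    "negative utilitarian": negative_utilitarian,
}

def compare(scenario: Scenario) -> Dict[str, str]:
    """Return each model's preferred option so that disagreements are visible."""
    return {
        name: max(scenario, key=lambda opt: score(scenario[opt]))
        for name, score in MODELS.items()
    }

if __name__ == "__main__":
    # One person suffers 5 so that three others each gain 3, versus doing nothing.
    scenario = {
        "intervene": [("A", 5, 0), ("B", 0, 3), ("C", 0, 3), ("D", 0, 3)],
        "do nothing": [("A", 0, 0), ("B", 0, 0), ("C", 0, 0), ("D", 0, 0)],
    }
    print(compare(scenario))  # the two models disagree on this case
```

The point of a comparison like this is only that disagreements between candidate systems become concrete, inspectable outputs rather than intuitions.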
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct: a fair share, where everyone gets the same amount of reward for the same amount of suffering.
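One way to state that “same reward for the same suffering” condition as a constraint, under the assumption (not argued for here) that suffering and reward can be put on a common numeric scale:

$$\frac{r_i}{s_i} = \frac{r_j}{s_j} \quad \text{for all sentient individuals } i, j \text{ with } s_i, s_j > 0,$$

where $s_i$ is the amount of suffering borne by individual $i$ and $r_i$ is the reward allotted to them.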
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will: bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm overall.