I have never sidestepped anything. The right answers are the ones dictated by weighing up harm on the basis of the available information (which includes the harm ratings in the database of knowledge of sentience). If the harm from one choice carries a higher weight than the harm from another, then that other choice is the more moral one.
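To make that weighing-up concrete, here is a minimal sketch in Python (my own illustration, not part of the proposal above; the choices, probabilities and harm ratings are invented) of the decision rule being described: estimate the weighted harm of each available choice from the data you have and prefer the choice with the lower total.

```python
# Illustrative sketch only: the choices, probabilities and harm ratings are
# invented, and real inputs would come from something like the "database of
# knowledge of sentience" mentioned above.

def expected_harm(outcomes):
    """Sum probability-weighted harm over the possible outcomes of a choice."""
    return sum(p * harm for p, harm in outcomes)

# Each choice maps to (probability, harm-weight) pairs for its possible outcomes.
choices = {
    "swerve":      [(0.9, 1.0), (0.1, 40.0)],  # small harm likely, big harm unlikely
    "dont_swerve": [(1.0, 25.0)],              # certain moderate harm
}

scores = {name: expected_harm(outcomes) for name, outcomes in choices.items()}
best = min(scores, key=scores.get)  # the choice carrying the lower weighted harm
print(scores, "->", best)           # {'swerve': 4.9, 'dont_swerve': 25.0} -> swerve
```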
How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.
And yet one of the answers is actually right, while the other isn’t.
Is 2+2 equal to 5 or to fish?
Which one of us will AGI judge to have the better argument for this? This kind of dispute will be settled by AGI’s intelligence quite independently of any morality rules that it might end up running. The best arguments will always win out, and I’m confident that I’ll be the one winning this argument when we have unbiased AGI weighing things up.
What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.
If you can show me an alternative morality which isn’t flawed and which produces different answers from mine when crunching the exact same data, then one of the two must be wrong, and that would provide a clear point at which close examination would lead to one of those systems being rejected.
You should really read up on the Orthogonality Thesis and related concepts. Also, how do you plan on distinguishing between right and wrong moralities?
“How do you know that? Why should anyone care about this definition? These are questions which you have definitely sidestepped.”
People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.
“Is 2+2 equal to 5 or to fish?”
Neither of those results works, but neither of them is my answer.
“What is this “unbiased AGI” who makes moral judgments on the basis of intelligence alone? This is nonsense—moral “truths” are not the same as physical or logical truths. They are fundamentally subjective, similar to definitions. You cannot have an “unbiased morality” because there is no objective moral reality to test claims against.”
You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.
“You should really read up on the Orthogonality Thesis and related concepts.”
Thanks - all such pointers are welcome.
“Also, how do you plan on distinguishing between right and wrong moralities?”
First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied.

The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them.

The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available.

With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.
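To show roughly what that calculation could look like, here is a small sketch (my own formalisation, with invented numbers; the tie-breaking rule of taking the highest total net outcome first and then the most even spread is an assumption I have added, since the trade-off isn't fully specified above):

```python
# Sketch of the "imagine you are every individual" test described above.
# The allocations, numbers, and the tie-breaking rule (total net outcome
# first, evenness of shares second) are assumptions for illustration.
from statistics import pstdev

def net_outcomes(allocation):
    """Per-individual net outcome: pleasure gained minus suffering endured."""
    return [pleasure - suffering for suffering, pleasure in allocation]

def score(allocation):
    nets = net_outcomes(allocation)
    # Higher total is better; among equal totals, a more even spread is better.
    return (sum(nets), -pstdev(nets))

# Candidate distributions: each entry is (suffering, pleasure) for one individual.
candidates = {
    "one_person_always_suffers": [(10, 0), (0, 8), (0, 8)],
    "take_turns":                [(4, 6), (3, 5), (3, 5)],
}

best = max(candidates, key=lambda name: score(candidates[name]))
print({k: score(v) for k, v in candidates.items()}, "->", best)  # -> take_turns
```

With these made-up figures both candidates deliver the same total net outcome, so the evenness tie-break prefers the "take turns" arrangement, matching the point above about the same individual not always being the one who suffers.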
“People should care about it because it always works. If anyone wants to take issue with that, all they have to do is show a situation where it fails. All examples confirm that it works.”
What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.
“Neither of those results works, but neither of them is my answer.”
I’ll stop presenting you with poorly-carried-out Zen koans and be direct. You have constructed a false dilemma. It is quite possible for both of you to be wrong.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.”
“All sentiences are equally important” is definitely a moral statement.
“First by recognising what morality is for. If there was no suffering, there would be no need for morality as it would be impossible to harm anyone. In a world of non-sentient robots, they can do what they like to each other without it being wrong as no harm is ever done. Once you’ve understood that and get the idea of what morality is about (i.e. harm management), then you have to think about how harm management should be applied. The sentient things that morality protects are prepared to accept being harmed if it’s a necessary part of accessing pleasure where that pleasure will likely outweigh the harm, but they don’t like being harmed in ways that don’t improve their access to pleasure. They don’t like being harmed by each other for insufficient gains. They use their intelligence to work out that some things are fair and some things aren’t, and what determines fairness is whether the harm they suffer is likely to lead to overall gains for them or not. In the more complex cases, one individual can suffer in order for another individual to gain enough to make that suffering worthwhile, but only if the system shares out the suffering such that they all take turns in being the ones who suffer and the ones who gain. They recognise that if the same individual always suffers while others always gain, that isn’t fair, and they know it isn’t fair simply by imagining it happening that way to them. The rules of morality come out of this process of rational thinking about harm management—it isn’t some magic thing that we can’t understand. To maximise fairness, that suffering which opens the way to pleasure should be shared out as equally as possible, and so should access to the pleasures. The method of imagining that you are all of the individuals and seeking a means of distribution of suffering and pleasure that will satisfy you as all of them would automatically provide the right answers if full information was available. Because full information isn’t available, all we can do is calculate the distribution that’s most likely to be fair on that same basis using the information that is actually available.”
I think that this is a fine (read: “quite good”; an archaic meaning) definition of morality-in-practice, but there are a few issues with your meta-ethics and surrounding parts. First, it is not trivial to define what beings are sentient and what counts as suffering (and how much). Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.
“With incorrect moralities, some individuals are harmed for others’ gains without proper redistribution to share the harm and pleasure around evenly. It’s just maths.”
I agree that it is mathematics, but where is this “proper” coming from? Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective. I agree that “what maximizes X?” is objective, though.
“What do you mean, it works? I agree that it matches our existing preconceptions and intuitions about morality better than the average random moral system, but I don’t think that that comparison is a useful way of getting to truth and meaningful categories.”
It works beautifully. People have claimed it’s wrong, but they can’t point to any evidence for that. We urgently need a system for governing how AGI calculates morality, and I’ve proposed a way of doing so. I came here to see what your best system is, but you don’t appear to have made any selection at all—there is no league table of the best proposed solutions, and no list for each entry of the worst problems with it. I’ve waded through a lot of stuff and have found that the biggest objection to utilitarianism is a false paradox. Why should you be taken seriously at all when you’ve failed to find that out for yourselves?
“You have constructed a false dilemma. It is quite possible for both of you to be wrong.”
If you trace this back to the argument in question, it’s about equal amounts of suffering being equally bad for sentiences in different species. If they are equal amounts, they are necessarily equally bad—if they weren’t, they wouldn’t have equal values.
“You’ve taken that out of context—I made no claim about it making moral judgements on the basis of intelligence alone. That bit about using intelligence alone was referring to a specific argument that doesn’t relate directly to morality.” --> “‘All sentiences are equally important’ is definitely a moral statement.”
Again you’re trawling up something that my statement about using intelligence alone was not referring to.
“First, it is not trivial to define what beings are sentient and what counts as suffering (and how much).”
That doesn’t matter—we can still aim to do the job as well as it can be done based on the knowledge that is available, and the odds are that that will be better than not attempting to do so.
“Second, if your morality flows entirely from logic, then all of the disagreement or possibility for being incorrect is inside “you did the logic incorrectly,” and I’m not sure that your method of testing moral theories takes that possibility into account.”
With AGI it will be possible to run multiple models of morality side by side, show up the differences between them, and prove that the logic is being done correctly. At that point, it will be easier to reveal the real faults rather than imaginary ones. But it would be better if we could prime AGI with the best candidate first, before it has the opportunity to start offering advice to powerful people.
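As a rough illustration of that side-by-side comparison (entirely my own sketch: the two toy "models", the scenario and the scoring are invented, and real candidate moralities would be far richer), the idea is to feed each candidate the same data and list the points where their verdicts differ:

```python
# Illustrative only: toy "moral models" applied to the same scenario so that
# their disagreements can be listed and examined.

def total_harm_minimiser(scenario):
    # Pick the option with the lowest summed harm, ignoring who bears it.
    return min(scenario["options"], key=lambda o: sum(o["harms"]))

def worst_off_protector(scenario):
    # Pick the option whose worst-affected individual is harmed least.
    return min(scenario["options"], key=lambda o: max(o["harms"]))

scenario = {
    "options": [
        {"name": "A", "harms": [9, 0, 0]},  # all harm lands on one individual
        {"name": "B", "harms": [4, 4, 4]},  # harm spread evenly, larger total
    ]
}

models = {"total_harm": total_harm_minimiser, "worst_off": worst_off_protector}
verdicts = {name: model(scenario)["name"] for name, model in models.items()}
print(verdicts)  # {'total_harm': 'A', 'worst_off': 'B'}
print("disagreement to examine:", len(set(verdicts.values())) > 1)
```

Any case where the verdicts diverge is exactly the kind of concrete point where close examination could lead to one system being rejected.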
“I agree that it is mathematics, but where is this “proper” coming from?”
Proper simply means correct—a fair share, where everyone gets the same amount of reward for the same amount of suffering.
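Read literally, that is a checkable property. A toy sketch (the figures and the helper is_fair are invented for illustration): a distribution counts as fair here only if reward per unit of suffering is the same for everyone.

```python
# Toy check of "same amount of reward for the same amount of suffering".
# Figures are invented for illustration.

def is_fair(shares, tolerance=1e-9):
    """shares: list of (suffering, reward) pairs, one per individual."""
    ratios = [reward / suffering for suffering, reward in shares if suffering > 0]
    return all(abs(r - ratios[0]) < tolerance for r in ratios) if ratios else True

print(is_fair([(2, 6), (4, 12)]))  # True: both get 3 units of reward per unit of suffering
print(is_fair([(2, 6), (4, 4)]))   # False: the second individual is short-changed
```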
“Could somebody disagree about whether, say, it is moral to harm somebody as retributive justice? Then the equations need our value system as input, and the results are no longer entirely objective.”
Retributive justice is inherently a bad idea because there’s no such thing as free will—bad people are not to blame for being the way they are. However, there is a need to deter others (and to discourage repeat behaviour by the same individual if they’re ever to be released into the wild again), so plenty of harm will typically be on the agenda anyway if the calculation is that this will reduce harm.