“You have an opinion, he has another opinion. Neither of you has a proof.”
If suffering is real, it creates a need for the management of suffering, and that management is morality. To deny that is to assert that suffering doesn’t matter and, by extension, that torturing innocent people is not wrong.
The kind of management required is minimisation (attempted elimination) of harm, excluding any component of harm that unlocks enjoyment great enough to cancel that harm out. If minimising harm doesn’t matter, there is nothing wrong with torturing innocent people; if enjoyment couldn’t cancel out some suffering, no one would consider their life worth living.
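To make that weighing rule concrete, here is a minimal sketch in Python with invented numbers; it assumes nothing beyond the rule just stated, that harm which unlocks outweighing enjoyment nets out rather than counting as something to eliminate.

```python
# Minimal sketch of the weighing rule, with invented numbers.
# An "event" pairs the suffering it inflicts with the enjoyment it unlocks.

def net_value(events):
    """Sum enjoyment minus suffering across all events, ignoring none."""
    return sum(enjoyment - suffering for suffering, enjoyment in events)

# Hypothetical example: a painful dental visit (suffering 5) that unlocks
# years of pain-free eating (enjoyment 40) should not be "minimised away".
dental_visit = [(5, 40)]
pointless_torture = [(50, 0)]

assert net_value(dental_visit) > 0       # harm cancelled out, and then some
assert net_value(pointless_torture) < 0  # harm with nothing to offset it
```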
All of this is reasoned and correct.
The remaining issue is how this management should weigh pleasure against suffering across different players. What I’ve found is a whole lot of different approaches attempting to do the same thing: some use naive methods that fail in a multitude of situations, while others appear to do well in most or all situations if they’re applied correctly, that is, by weighing up all the harm and pleasure involved instead of ignoring some of it.
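As an illustration of where the naive methods go wrong, here is a sketch with invented players and numbers: the naive scorer reads only part of the ledger, while the careful one weighs everything.

```python
# Invented scenario: three players, each with some pleasure and some suffering.
players = {
    "A": {"pleasure": 30, "suffering": 5},
    "B": {"pleasure": 10, "suffering": 25},
    "C": {"pleasure": 0,  "suffering": 15},
}

def naive_score(players):
    # Counts only the pleasure and ignores the suffering entirely.
    return sum(p["pleasure"] for p in players.values())

def full_score(players):
    # Weighs all the pleasure against all the suffering, ignoring none of it.
    return sum(p["pleasure"] - p["suffering"] for p in players.values())

print(naive_score(players))  # 40: looks like a good outcome
print(full_score(players))   # -5: net-negative once all the harm is counted
```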
It looks as if my method for computing morality produces the same results as utilitarianism, and it likely does the job well enough to govern safe AGI. Because we’re going to be up against people who will release bad (biased) AGI, we will be forced to install our AGI into devices and set them loose fairly soon after full AGI is achieved. For this reason, it would be useful if there were a serious place where these issues could be discussed now, so that we can systematically home in on the best system of moral governance and throw out all the junk, but I still don’t see that happening anywhere (and it certainly isn’t happening here). We need a dynamic league table of proposed solutions, each with its own league table of objections to it, so that we can focus on the urgent task of identifying the junk and reducing the clutter to something clear (a sketch of such a table follows this paragraph). It is likely that AGI will do this job itself, but it would be better if humans could get there first using the power of their own wits. Time is short.
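A league table like that could start very simply. The sketch below is one possible minimal data structure, assuming each objection carries a judged weight and proposals are ranked by the weight of their unresolved objections; all names, weights, and entries are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Objection:
    text: str
    weight: float          # judged severity of the objection
    resolved: bool = False

@dataclass
class Proposal:
    name: str
    objections: list = field(default_factory=list)

    def open_weight(self) -> float:
        # Total weight of objections no one has answered yet.
        return sum(o.weight for o in self.objections if not o.resolved)

# Invented entries purely for illustration.
table = [
    Proposal("Total utilitarianism",
             [Objection("Mere Addition Paradox", 3.0, resolved=True)]),
    Proposal("Naive harm-ignoring method",
             [Objection("Permits uncompensated torture", 10.0)]),
]

# The least-objectionable proposals rise to the top; the junk sinks.
for p in sorted(table, key=lambda p: p.open_weight()):
    print(f"{p.name}: unresolved objection weight = {p.open_weight()}")
```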
My own attempt to do this job has led me to identify three systems which appear to work better than the rest. All three produce the same results in most situations, but one produces slightly different results in cases where the number of players in a scenario is variable and where that variation depends on whether those players come to exist or not. Where the results differ, it looks as if we have a range of answers that are all moral. That is something I need to explore and test further, but I no longer expect to get any help with it from other humans, because they’re simply not awake.

“I can tear your proposed method to pieces and show that it’s wrong,” they promise, and that gets my interest, because it’s exactly what I’m looking for: sharp, analytical minds that can cut through to the errors and show them up. But no: they completely fail to deliver. Instead, I find that they are the guardians of a mountain of garbage with a few gems hidden in it which they can’t sort into two piles, junk and jewels. “Utilitarianism is a pile of pants!” they say, because of the Mere Addition Paradox (restated with explicit numbers below). I resolve that “paradox” for them, and what happens? Denial of the mathematics, down-voting of my comments, and up-votes for the irrational ones. Sadly, that disqualifies this site from serious discussion: it’s clear that if any other intelligence has visited here before me, it didn’t hang around. I will follow its lead and look elsewhere.
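For readers who haven’t met it, here is the standard setup of the Mere Addition Paradox with invented numbers, so the arithmetic being argued over is at least explicit; this states the puzzle only and makes no claim to reproduce the resolution referred to above.

```python
# Standard Parfit setup, numbers invented for illustration:
# A:  100 people at welfare 10
# A+: those 100 plus another 100 at welfare 1 (lives barely worth living)
# B:  all 200 levelled to welfare 6

A      = [10] * 100
A_plus = [10] * 100 + [1] * 100
B      = [6] * 200

def total(pop):
    return sum(pop)

def average(pop):
    return sum(pop) / len(pop)

for name, pop in [("A", A), ("A+", A_plus), ("B", B)]:
    print(f"{name}: total={total(pop)}, average={average(pop):.1f}")

# Totals rank B (1200) > A+ (1100) > A (1000), and iterating the step drives
# totals toward ever larger, barely-worth-living populations; averages rank
# A (10.0) above B (6.0) and A+ (5.5). The "paradox" is that each step seems
# individually acceptable while the endpoint does not.
```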