I’ve read the Arbital post several times now to make sure I’ve got the point, and most of the complexity it refers to is what my solution covers with its database of knowledge of sentience. The problem for AGI is exactly the same as it would be for us if we went to an alien world and discovered an intelligent species like our own which asked us to help resolve the conflicts raging on their planet (having heard from us that we managed to do this on our own planet). But these aliens are unlike us in many ways—different things please or anger them, and we need to collect a lot of knowledge about this so that we can make accurate moral judgements in working out the rights and wrongs of all their many conflicts. We are now just like AGI, starting with an empty database. Well, we may find that some of the contents of our database about human likes and dislikes helps in places, but some parts might be so wrong that we must be very careful not to jump to incorrect assumptions. Crucially though, just like AGI, we do have a simple principle to apply to sort out all the moral problems on this alien world. The complexities are merely details to store in the database, but the algorithm for crunching the data is the exact same one used for working out morality for humans—it remains a matter of weighing up harm, and it’s only the weightings that are different.
Of course, the weightings should also change for every individual according to their own personal likes and dislikes—just as we have difficulty understanding the aliens, we have difficulty understanding other humans, and we can even have difficulty understanding ourselves. When we’re making moral decisions about people we don’t know, we have to go by averages and hope they fit, but any information we have about the individuals in question will help us improve our calculations. If a starving person has an intolerance to a particular kind of food and we’re taking emergency supplies to their village, we’ll try to make sure we don’t run out of everything except that problem food item before we reach that individual, but we can only get that right if we know to do so. The complexities are huge, but in every case we can still do the correct thing based on the information available to us, and we’re always running the same, simple morality algorithm. The complexity that is blinding everyone to what morality is lies not in the algorithm; the algorithm is simple and universal.
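To make the claim concrete, here is a minimal sketch of what "same algorithm, different weightings" might look like in code. Everything here is an illustrative assumption—the category names, the numbers, and the fallback-to-averages rule are invented for the example (including the `villager_7` case, echoing the food-intolerance scenario above); the original text specifies none of them.

```python
# Hypothetical sketch: one universal harm-weighing algorithm driven by a
# database of weightings. All names and values are illustrative assumptions.

# Database of average harm weightings per kind of sentient being.
SPECIES_DEFAULTS = {
    "human": {"hunger": 1.0, "pain": 1.0, "food_intolerance": 2.0},
    "alien": {"hunger": 0.5, "pain": 1.5, "food_intolerance": 2.0},
}

# Per-individual overrides, used whenever we actually know the individual.
INDIVIDUAL_OVERRIDES = {
    "villager_7": {"food_intolerance": 5.0},  # known severe intolerance
}

def harm_weight(species, individual, harm_type):
    """Use individual knowledge when we have it; fall back to averages."""
    overrides = INDIVIDUAL_OVERRIDES.get(individual, {})
    if harm_type in overrides:
        return overrides[harm_type]
    return SPECIES_DEFAULTS[species].get(harm_type, 1.0)

def total_harm(effects):
    """effects: list of (species, individual, harm_type, magnitude)."""
    return sum(magnitude * harm_weight(species, individual, harm_type)
               for species, individual, harm_type, magnitude in effects)

def choose_action(options):
    """The 'simple, universal' step: pick the option with least weighted harm."""
    return min(options, key=lambda option: total_harm(option[1]))

# Two ways of distributing emergency food supplies to the village:
options = [
    ("ignore intolerance", [("human", "villager_7", "food_intolerance", 3.0)]),
    ("reserve safe food",  [("human", "villager_3", "hunger", 2.0)]),
]
best = choose_action(options)  # weighs 3.0*5.0 = 15.0 against 2.0*1.0 = 2.0
```

The point of the sketch is only structural: `choose_action` never changes, whether the affected parties are humans or aliens; all of the complexity lives in the weightings database that `harm_weight` consults.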