Thanks. I was actually trying to post the above as a personal blog post while finding out how the site works, but I think I misunderstood how the buttons at the bottom of the page function. It has appeared in the Frontpage list, where I wasn't expecting it to go; I had hoped that if anyone wanted to promote it to Frontpage, they'd discuss it with me first so that I'd have a chance to edit it into proper shape. I have read a lot of articles elsewhere about machine ethics but have yet to find anything that spells out what morality is in the way that I think I have. If there's something here that does the job better, though, I want to find it, so I will certainly follow your pointers. What I've seen from other people building AGI has alarmed me, because their ideas about machine ethics appear to be way off, so what I'm looking for is somewhere (anywhere) that practical solutions are being discussed seriously for systems that may be nearer to completion than is generally believed.
Oh, I am sorry about the UI being confusing! I will move the post back to your personal blog.
I’ve read the Arbital post several times now to make sure I’ve got the point, and most of the complexity it refers to is what my solution covers with its database of knowledge about sentience. The problem for AGI is exactly the same as the one we would face if we went to an alien world and discovered an intelligent species like our own which asked us to help resolve the conflicts raging on their planet (having heard from us that we managed to do this on our own). But these aliens are unlike us in many ways: different things please or anger them, and we would need to collect a lot of knowledge about this in order to make accurate moral judgements about the rights and wrongs of all their many conflicts. We would then be just like AGI, starting with an empty database. We might find that some of the contents of our database of human likes and dislikes helps in places, but other parts could be so wrong that we would have to be very careful not to jump to incorrect assumptions. Crucially though, just like AGI, we would have a simple principle to apply to sort out all the moral problems on this alien world. The complexities are merely details to store in the database; the algorithm for crunching the data is the exact same one used for working out morality for humans. It remains a matter of weighing up harm, and it’s only the weightings that are different.
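To make that concrete, here is a minimal sketch of the idea, with every name and number invented purely for illustration (it isn’t anyone’s actual system): the harm-weighing function never changes, and only the weighting table, standing in for the database of knowledge about sentience, differs between humans and the aliens.

```python
from dataclasses import dataclass

@dataclass
class Effect:
    kind: str        # the kind of harm, e.g. "hunger" or "insult"
    severity: float  # how strongly the affected party experiences it

def weigh_harm(effects, weightings):
    """Total harm of an action: each effect scaled by how much that kind
    of harm matters to the sentient beings involved (default weight 1.0)."""
    return sum(weightings.get(e.kind, 1.0) * e.severity for e in effects)

# The same function serves both species; only the weightings change.
human_weightings = {"hunger": 1.0, "injury": 2.0, "insult": 0.3}
alien_weightings = {"hunger": 0.5, "injury": 2.0, "insult": 3.0}

effects = [Effect("hunger", 4.0), Effect("insult", 2.0)]
print(weigh_harm(effects, human_weightings))  # 4.6
print(weigh_harm(effects, alien_weightings))  # 8.0
```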
Of course, the weightings should also change for every individual according to their own personal likes and dislikes: just as we have difficulty understanding the aliens, we have difficulty understanding other humans, and we can even have difficulty understanding ourselves. When we’re making moral decisions about people we don’t know, we have to go by averages and hope they fit, but any information we have about the individuals in question will help us improve our calculations. If a starving person has an intolerance to a particular kind of food and we’re taking emergency supplies to their village, we’ll try to make sure we don’t run out of everything except that problem food before we reach them, but we can only get that right if we know to do so. The complexities are huge, but in every case we can still do the correct thing based on the information available to us, and we’re always running the same simple morality algorithm. The complexity that blinds everyone to what morality is does not sit in the algorithm itself. The algorithm is simple and universal.
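And here is the same sketch extended to individuals: where we know something specific about a person (or alien), their own weightings override the population average; where we know nothing, the average is all we have to go on. Again, the names and numbers are purely illustrative.

```python
def weightings_for(individual_id, individual_db, population_average):
    """Individual-specific weightings where known, population averages
    otherwise; better information simply gives better weightings."""
    return {**population_average, **individual_db.get(individual_id, {})}

# Hypothetical example: one villager is known to have a food intolerance.
population_average = {"hunger": 1.0, "food_intolerance": 0.0}
individual_db = {"villager_17": {"food_intolerance": 2.0}}

print(weightings_for("villager_17", individual_db, population_average))
# {'hunger': 1.0, 'food_intolerance': 2.0}
print(weightings_for("stranger", individual_db, population_average))
# {'hunger': 1.0, 'food_intolerance': 0.0}
```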