Yes, mitchell porter, of course there is no method (so far) (that we know of) for moral perception or moral action that does not rely on the human mind. But that does not refute my point, which again is as follows: most of the readers of these words seem to believe that the maximization of happiness or pleasure and the minimization of pain is the ultimate good. When you combine that belief with egalitarianism, which can be described as the belief that you yourself have no special moral value relative to any other human, and neither do kings or movie stars or Harvard graduates, you get a value system that is often called utilitarianism. Utilitarianism and egalitarianism have become central features of our moral culture over the last 400 years and have exerted many beneficial effects. To give one brief example, they have done much to eliminate the waste of human potential that came from having a small group and their descendants own everything. But the scientific and technological environment we now find ourselves in has become challenging enough that if we continue to use utilitarianism and egalitarianism to guide us, we will go badly astray. (I have believed this since 1992, when I read a very good book on the subject.) I consider utilitarianism particularly inadequate for planning for futures in which humans will no longer be the only ethical intelligences, that is, futures in which humans will share the planet and the solar system with AGIs.
You mentioned CEV, which is a complex topic, but I will briefly summarize my two main objections. First, the author of CEV says that one of his intentions is for everyone’s opinion to have weight: he does not wish to disenfranchise anyone. Since most humans care mainly or only about happiness, I worry that this will lead to an intelligence explosion that is mostly or entirely about maximizing happiness, which would interfere with my plans, namely to exert a beneficial effect on reality that persists indefinitely but has little to do in the long term with whether the humans were happy or sad. Second, there is much ambiguity in CEV that has to be resolved in the process of putting it into a computer program: everything that goes into a computer program has to be specified very precisely. The person who currently has the most influence on how those ambiguities will be resolved has a complex and not easily summarized value system, but utilitarianism and “humanism”, which for the sake of this comment I will define as the idea that humankind is the measure of all things, obviously figure very prominently in it.
I will keep checking this thread for replies to my comment.