Having said that, though, morality does say that if you have the means to give someone an opportunity to increase their happiness at no cost to yourself or anyone else, you should offer it to them. This can also be framed in terms of harm: withholding the opportunity would generate harm if the person found out you hadn't offered it.
What you say is true only if the person is part of our group, and it is so because we instinctively know that increasing our group's probability of survival also increases our own. Unless we use complete randomness to make a move, we can't make a completely free move. Even Mother Teresa didn't make free moves: she would help others only in exchange for God's love. The only moment we really care for others' feelings is when they yell at us because we have harmed them, or when they thank us because we got them out of trouble, thus when we are close enough to communicate; but even what we do then is selfish: we move away from people who yell at us and closer to those who thank us, automatically breaking or building a group in our favour. I'm pretty sure that what we do is always selfish, and I think that you are trying to design a perfectly free AGI, which I find impossible to do if the designer is itself selfish. Do you by chance think that we are not really selfish?
There are many people who are unselfish, and some who go so far that they end up worse off than the strangers they help. You can argue that they do this because that's what makes them feel best about their lives, and that is probably true, which means even the most extreme altruism can be seen as selfish. We see many people who want to help the world's poor get up to the same level as the rich, while others don't give a damn and would be happy for them all to go on starving, so if both types are being selfish, that's not a useful word for categorising them. It's better to go by whether they play fair by others. The altruists may be overly fair, while good people are fair and bad ones are unfair, and what determines whether they're being fair or not is morality. AGI won't be selfish (if it's the kind with no sentience), but it won't be free either, in that its behaviour is dictated by rules. If those rules are correctly made, AGI will be fair.
The most extreme altruism can be seen as selfish, but inversely, the most extreme selfishness can also be seen as altruistic: it depends on the viewpoint. We may think that Trump is being selfish when he closes the door to migrants, for instance, but he doesn't think so, because this way he is being altruistic towards the Republicans, which is a bit selfish since he needs them to be re-elected, but he doesn't feel selfish himself. Selfishness is not about sentience, since we can't feel selfish; it is about defending what we are made of, or part of. Humanity holds together because we are all selfish, and because selfishness implies that the group will help us if we need it. Humanity itself is selfish when it wants to protect the environment, because it is for itself that it does so. The only way to feel guilty of having been selfish is after having weakened somebody from our own group, because then we know we have also weakened ourselves. With no punishment in view from our own group, no guilt can be felt, and no guilt can be felt either if the punishment comes from another group. That's why torturers say that they don't feel guilty.
I have a metaphor for your kind of morality: it's like Windows. It will work once everything has been taken into account; otherwise it will freeze all the time, like the first versions of Windows did. The problem is that it might hurt people while it freezes, but the risk might still be worthwhile. As with any other invention, the way to minimize that risk would be to proceed in small steps. I'm still curious about the possibility of building a selfish AGI, though. I still think it could work. There would be some risks too, but they might not be more dangerous than with yours. Have you tried to imagine what kind of programming would be needed? Such an AGI should behave like a good dictator: to avoid revolutions, he wouldn't kill people just because they don't think like him; he would look for a solution where everybody likes him. But how would he proceed exactly?
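To make the question a bit more concrete, here is a minimal sketch of the kind of decision loop such a "good dictator" might run. It is purely hypothetical: the names, the per-group approval scores, and the revolt threshold are my own inventions, not anyone's actual design. The "selfish" goal of staying in power is modelled as maximizing total approval while never letting any group get angry enough to revolt.

```python
from dataclasses import dataclass

# Assumed: below this approval level, a group rebels.
REVOLT_THRESHOLD = 0.3

@dataclass
class Action:
    name: str
    approval_change: dict  # per-group change in approval, e.g. {"farmers": +0.1}

def choose_action(actions, approval):
    """Pick the action that maximizes total approval (the 'selfish' goal of
    staying in power) while keeping every group above the revolt threshold."""
    best, best_score = None, float("-inf")
    for act in actions:
        new_approval = {group: approval[group] + act.approval_change.get(group, 0.0)
                        for group in approval}
        if min(new_approval.values()) < REVOLT_THRESHOLD:
            continue  # would trigger a revolution; a selfish ruler avoids this
        score = sum(new_approval.values())
        if score > best_score:
            best, best_score = act, score
    return best

# Example: two candidate policies and the current approval of two groups.
actions = [Action("tax cut", {"rich": 0.2, "poor": -0.1}),
           Action("food aid", {"rich": -0.05, "poor": 0.3})]
print(choose_action(actions, {"rich": 0.5, "poor": 0.4}).name)
```

The point of the sketch is only that "selfish" and "keeps everybody reasonably happy" are not necessarily in conflict, which is what the good-dictator idea relies on.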
The main reason why politicians have opted for democracy is selfishness: they knew their turn would come if the other parties respected the rules, and they knew it was better for the country they were part of, so better for them too. But an AGI can't leave power to humans if he thinks it won't work, so what if the system had two AGIs, for instance: one with a tendency to try new things and one with a tendency not to change things, so that the people could vote for the one they want? It wouldn't be exactly like democracy, since there wouldn't be any competition between the AGIs, but there could be parties for people to join so they could play the power game. I don't like power games, but they seem to be necessary to create groups, and without groups, I'm not sure society would work.
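Here is a rough sketch of what I mean by the two-AGI arrangement. It is again hypothetical; the two policy functions are just stand-ins for "try new things" and "keep things as they are". The two AGIs never compete with each other: people simply vote on which policy governs the next term.

```python
import random

def exploratory_policy(state):
    # Stand-in for the AGI that tries new things: nudge the state in some direction.
    return state + random.choice([-1, 1])

def conservative_policy(state):
    # Stand-in for the AGI that avoids change: leave the state as it is.
    return state

def govern(votes_for_change, votes_for_stability, state):
    """Hand the next term to whichever policy won the vote; the AGIs themselves
    don't campaign or compete, they just apply the policy people chose."""
    if votes_for_change > votes_for_stability:
        return exploratory_policy(state)
    return conservative_policy(state)

# Example term: 60 people vote for change, 40 for stability.
state = govern(60, 40, state=0)
print(state)
```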