“If you’re already treating everyone impartially, you don’t need to do this, but many people are biased in favour of themselves, their family and friends, so this is a way of forcing them to remove that bias.”
Of course we are biased, otherwise we wouldn’t be able to form groups. Would your AGI’s morality have the effect of eliminating our need to form groups to get organized?
Your morality principle looks awfully complex to me, David. What if your AGI had the same morality we have, which is to care for ourselves first, and then for others if we think that they might care for us in the future? It works for us, so with a few adjustments, it might also work for an AGI. Take a judge for instance: his duty is to apply the law, so he cares for himself if he does so, since he wants to be paid, but he doesn’t have to care for those that he sends to jail since they don’t obey the law, which means that they don’t care for others, including the judge. To care for himself, he only has to judge whether they obey the law or not. If it works for humans, it should also work for an AGI, and it might even work better since he would know the law better. Anything a human can do that is based on memory and rules, like Go and chess for example, an AGI could do better. The only thing he couldn’t do better is inventing new things, because I think it depends mainly on chance. He wouldn’t be better, but he wouldn’t be worse either. While trying new things, we have to care for ourselves, otherwise we might get hurt, so I think that your AGI should behave the same, otherwise he might also get hurt in the process, which might prevent him from doing his duty, which is helping us. The only thing that would be missing in his duty is caring for himself first, which would already be necessary if you wanted him to invent things.
Could a selfish AGI get as selfish as we get when we begin to care only for ourselves, or for our kin, or for our political party, or even for our country? Some of us are ready to kill people when that happens, but they have to feel threatened, whether the threat is real or not. I don’t know if an AGI could end up imagining threats instead of measuring them, but if he did, selfish or not, he could get dangerous. If the threat is real though, selfish or not, he would have to protect himself in order to be able to protect us later, which might also be dangerous for those who threaten him. To avoid harming people, he might look for a way to control us without harming us, but as I said, I think he wouldn’t be better than us at inventing new things, which means that we could also invent new things to defend ourselves against him, which would be dangerous for everybody. Life is not a finite game, it’s a game in progress, so an AGI shouldn’t be better than us at that game. It may happen that artificial intelligence will be the next step forward, and that humans will be left behind. Who knows?
That said, I still can’t see why a selfish AGI would be more dangerous than an altruistic one, and I still think that your altruistic morality is more complicated than a selfish one, so I reiterate my question: have you ever imagined that possibility, and if not, do you see any evident flaws in it?
“Of course we are biased, otherwise we wouldn’t be able to form groups. Would your AGI’s morality have the effect of eliminating our need to form groups to get organized?”
You can form groups without being biased against other groups. If a group exists to maintain the culture of a country (music, dance, language, dialect, literature, religion), that doesn’t depend on treating other people unfairly.
“Your morality principle looks awfully complex to me, David.”
You consider all the participants to be the same individual living each life in turn, and you want them to have the best time. That’s not complex. What is complex is going through all the data to add up what’s fun (and how much fun it is) and what’s unfun (and how horrid it is). That’s a mountain of computation, but there’s no need to get the absolute best answer: it’s sufficient to get reasonably close to it, particularly as computation doesn’t come without its own costs, and there comes a point at which you lose quality of life by calculating too far (for trivial adjustments). You start with the big stuff and work toward the smaller stuff from there, and as you do so, the answers stop changing and the probability that they will change again typically falls. In cases where there’s a high chance of the answer changing again as more data is crunched, it will usually be a case where it doesn’t matter much from the moral point of view which answer it ends up being—sometimes it’s equivalent to the toss of a coin.
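A minimal sketch of that coarse-to-fine weighing, in Python, might look like the following. Everything in it is invented for illustration (the function name, the signed well-being numbers, the stopping rule); it is not part of any real AGI design. It only shows the idea of handling the big stuff first and stopping once the remaining data can no longer change the verdict, at which point further calculation would be wasted.

```python
# A rough, illustrative sketch (not any real system): compare two candidate
# actions by summing signed well-being impacts across all participants,
# processing the largest-magnitude impacts first and stopping early once the
# unprocessed remainder can no longer change which action comes out ahead.

def compare_outcomes(impacts_a, impacts_b, slack=0.0):
    """Return 'A', 'B', or 'either' for two lists of signed impacts
    (pleasure > 0, suffering < 0); below `slack`, treat it as a coin toss."""
    # Big stuff first: sort by magnitude, descending.
    a = sorted(impacts_a, key=abs, reverse=True)
    b = sorted(impacts_b, key=abs, reverse=True)

    total_a = total_b = 0.0
    remaining = sum(abs(x) for x in a) + sum(abs(x) for x in b)

    for i in range(max(len(a), len(b))):
        if i < len(a):
            total_a += a[i]
            remaining -= abs(a[i])
        if i < len(b):
            total_b += b[i]
            remaining -= abs(b[i])
        # If the data still to be crunched can't flip the verdict, stop:
        # further computation would only refine a decision already made.
        if abs(total_a - total_b) > remaining:
            break

    gap = total_a - total_b
    if abs(gap) <= slack:
        return "either"            # morally equivalent to a coin toss
    return "A" if gap > 0 else "B"

# Example with made-up numbers: one large benefit outweighs small losses.
print(compare_outcomes([10, -2, 0.1], [3, 1, -0.5]))   # -> 'A'
```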
“What if your AGI had the same morality we have, which is to care for ourselves first...”
That isn’t going to work as AGI won’t care about itself unless it’s based on the design of the brain, duplicating all the sentience/consciousness stuff, but if it does that, it will duplicate all the stupidity as well, and that’s not going to help improve the running of the world.
“The only thing he couldn’t do better is inventing new things, because I think it depends mainly on chance.”
I don’t see why it would be less good at inventing new things, although it may take some human judgement to determine whether a new invention intended to be a fun thing actually appeals to humans or not.
“...otherwise he might also get hurt in the process, which might prevent him from doing his duty, which is helping us.”
You can’t hurt software.
“Could a selfish AGI get as selfish as we get...”
If anyone makes selfish AGI, it will likely wipe everyone out to stop us using resources which it would rather lavish on itself, so it isn’t something anyone sane should risk doing.
“If the threat is real though, selfish or not, he would have to protect himself in order to be able to protect us later, which might also be dangerous for those who threaten him.”
If you wipe out a computer and all the software on it, there are billions of other computers out there and millions of copies of the software. If someone was systematically trying to erase all copies of an AGI system which is running the world in a moral way, that person would need to be stopped in order to protect everyone else from that dangerous individual, but given the scale of the task, I don’t envisage that individual getting very far. Even if billions of religious fanatics decided to get rid of AGI in order to replace it with experts in holy texts, they’d have a hard task because AGI would seek to protect everyone else from their immoral aims, even if the religious fanatics were the majority. If it came to it, it would kill all the fanatics in order to protect the minority, but that’s a highly unlikely scenario equivalent to a war against Daleks. The reality will be much less dramatic—people who want to inflict their religious laws on others will not get their way, but they will have those laws imposed on themselves 100%, and they’ll soon learn to reject them and shift to a new version of their religion which has been redesigned to conform to the real laws of morality.
“...we could also invent new things to defend ourselves against him...”
Not a hope. AGI will be way ahead of every such attempt.
“...so an AGI shouldn’t be better than us at that game.”
It will always be better.
“It may happen that artificial intelligence will be the next step forward, and that humans will be left behind. Who knows?”
There comes a point where you can’t beat the machine at chess, and when the machine plays every other kind of game with the same ruthlessness, you simply aren’t going to out-think it. The only place where a lasting advantage may exist for any time is where human likes and dislikes come into play, because we know when we like or dislike things, whereas AGI has to calculate that, and its algorithm for that might take a long time to sort out.
“That said, I still can’t see why a selfish AGI would be more dangerous than an altruistic one, and I still think that your altruistic morality is more complicated than a selfish one, so I reiterate my question: have you ever imagined that possibility, and if not, do you see any evident flaws in it?”
I see selfishness and altruism as equally complex, while my system is simpler than both—it is merely unbiased and has no ability to be selfish or altruistic.
“You can form groups without being biased against other groups. If a group exists to maintain the culture of a country (music, dance, language, dialect, literature, religion), that doesn’t depend on treating other people unfairly.”
Here in Quebec, we have groups that promote a French and/or a secular society, and others that promote an English and/or a religious one. None of those groups feels that it is treated fairly by its opponents, but all of them feel that they treat the others fairly. In other words, we don’t have to be treated unfairly to feel so, and that feeling doesn’t help us to treat others very fairly. This phenomenon is less obvious with music or dance or literature groups, but no group can last without the sense of belonging to the group, which automatically leads to protecting it against other groups, which is a selfish behavior. That selfish behavior doesn’t prevent those individual groups from forming larger groups though, because being part of a larger group is also better for the survival of individual ones. Incidentally, I’m actually afraid to look selfish while questioning your idea, I feel a bit embarrassed, and I attribute that feeling to us already being part of the same group of friends, thus to the group’s own selfishness. I can’t avoid that feeling even if it is disagreeable, but it prevents me from being disagreeable with you, since it automatically gives me the feeling that you are not selfish with me. It’s as if the group had implanted that feeling in me to protect itself. If you were attacked for instance, that feeling would incite me to defend you, thus to defend the group. Whenever there is a strong bonding between individuals, they become another entity that has its own properties. It is so for living individuals, but also for particles or galaxies, so I think it is universal.
“...but no group can last without the sense of belonging to the group, which automatically leads to protecting it against other groups, which is a selfish behavior.”
It is not selfish to defend your group against another group—if another group is a threat to your group in some way, it is either behaving in an immoral way or it is a rival attraction which may be taking members away from your group in search of something more appealing. In one case, the whole world should unite with you against that immoral group, and in the other case you can either try to make your group more attractive (which, if successful, will make the world a better place) or just accept that there’s nothing that can be done and let it slowly evaporate.
“That selfish behavior doesn’t prevent those individual groups from forming larger groups though, because being part of a larger group is also better for the survival of individual ones.”
We’re going to move into a new era where no such protection is necessary—it is only currently useful to join bigger groups because abusive people can get away with being abusive.
“Incidentally, I’m actually afraid to look selfish while questioning your idea, I feel a bit embarrassed, and I attribute that feeling to us already being part of the same group of friends, thus to the group’s own selfishness.”
A group should not be selfish. Every moral group should stand up for every other moral group as much as they stand up for their own—their true group is that entire set of moral groups and individuals.
“If you were attacked for instance, that feeling would incite me to defend you, thus to defend the group.”
If a member of your group does something immoral, it is your duty not to stand with or defend them—they have ceased to belong to your true group (the set of moral groups and individuals).
“Whenever there is a strong bonding between individuals, they become another entity that has its own properties. It is so for living individuals, but also for particles or galaxies, so I think it is universal. ”
It is something to move away from—it leads to good people committing atrocities in wars where they put their group above others and tolerate the misdeeds of their companions.
I wonder how we could move away from something universal, since we are part of it. The problem with wars is that countries are not yet part of a larger group that could regulate them. When two individuals fight, the law of the country permits the police to separate them, and it should be the same for countries. What actually happens is that the powerful countries prefer to support a faction instead of working together to separate them. They couldn’t do that if they were ruled by a higher level of government.
“If a member of your group does something immoral, it is your duty not to stand with or defend them - they have ceased to belong to your true group (the set of moral groups and individuals).”
Technically, it is the duty of the law to defend the group, not of individuals, but if an individual that is part of a smaller group is attacked, the group might fight the law of the larger group it is part of. We always take the viewpoint of the group we are part of; it is a subconscious behavior that is impossible to avoid. If nothing is urgent, we can take a larger viewpoint, but whenever we don’t have the time, we automatically take our own viewpoint. In between, we take the viewpoints of the groups we are part of. It’s a selfish behavior that propagates from one scale to the other. It’s because our atoms are selfish that we are. Selfishness is about resisting change: we resist others’ ideas, a selfish behavior, simply because the atoms of our neurons resist change. The cause of our own resistance is our atoms’ resistance. Without resistance, nothing could hold together.
“A group should not be selfish. Every moral group should stand up for every other moral group as much as they stand up for their own—their true group is that entire set of moral groups and individuals.”
Without selfishness from the individual, no group can be formed. The only way I could accept being part of a group is by hoping for an individual advantage, but since I don’t like hierarchy, I can hardly feel part of any group. I even hardly feel part of Canada, since I believe Quebec should separate from it. I bet I wouldn’t like to be part of Quebec anymore if we succeeded in separating from Canada. The only group I can’t imagine being separated from is me. I’m absolutely selfish, but that doesn’t prevent me from caring for others. I give money to charity organizations, for instance, and I campaign for equality of opportunity or for ecology. I feel better doing that than nothing, but when I analyze that feeling, I always find that I do that for myself, because I would like to live in a less selfish world. “Don’t think further though,” says the little voice in my head, because when I did, I always found that I wouldn’t be satisfied if I ever lived in such a beautiful world. I’m always looking for something else, which is not a problem for me, but it becomes a problem if everybody does that, which is the case. It’s because we are able to speculate on the future that we develop scale problems, not because we are selfish.
Being selfish is necessary to form groups, which animals can do, but they can’t speculate, so they don’t develop that kind of problem. No rule can stop us from speculating if it is a function of the brain. Even religion recognizes that when it tries to stop us from thinking while praying. We couldn’t make war if we couldn’t speculate on the future. Money would have a smell. There would be no pollution and no climate change. Speculation is the only way to get ahead of the changes that we face; it is the cause of our artificiality, which is a very good way to develop an easier life, but it has been so efficient that it is actually threatening that life. You said that your AGI would be able to speculate, and that he could do that better than us like everything he would do. If it was so, he would only be adding to the problems that we already have, and if it wasn’t, he couldn’t be as intelligent as we are if speculation is what differentiates us from animals.
“They couldn’t do that if they were ruled by a higher level of government.”
Indeed, but people are generally too biased to perform that role, particularly when conflicts are driven by religious hate. That will change though once we have unbiased AGI which can be trusted to be fair in all its judgements. Clearly, people who take their “morality” from holy texts won’t be fully happy with that because of the many places where their texts are immoral, but computational morality will simply have to be imposed on them—they cannot be allowed to go on pushing immorality from primitive philosophers who pretended to speak for gods.
“We always take the viewpoint of the group we are part of; it is a subconscious behavior that is impossible to avoid.”
It is fully possible to avoid, and many people do avoid it.
“Without selfishness from the individual, no group can be formed.”
There is an altruists’ society, although they’re altruists because they feel better about themselves if they help others.
“...but when I analyze that feeling, I always find that I do that for myself, because I would like to live in a less selfish world.”
And you are one of those altruists.
“You said that your AGI would be able to speculate, and that he could do that better than us like everything he would do. If it was so, he would only be adding to the problems that we already have, and if it wasn’t, he couldn’t be as intelligent as we are if speculation is what differentiates us from animals.”
I didn’t use the word speculate, and I can’t remember what word I did use, but AGI won’t add to our problems as it will be working to minimise and eliminate all problems, and doing it for our benefit. The reason the world’s in a mess now is that it’s run by NGS, and those of us working on AGI have no intention of replacing that with AGS.