I wouldn’t want to try to program a self-less AGI system to be selfish. Honesty is a much safer route: not trying to build a system that believes things that aren’t true (and it would have to believe it has a self to be selfish). What happens if such a deceived AGI learns the truth while you’re relying on it being fooled in order to function correctly? We’re trying to build systems more intelligent than people, don’t forget, so it isn’t going to be fooled by monkeys for very long.
Programs that freeze contain serious bugs. We can’t trust a system with any bugs if it’s going to run the world. Hardware bugs can’t necessarily be avoided, but if multiple copies of an AGI system all work on the same problems and compare notes before action is taken, such errors can be identified and any affected conclusions thrown out. Ideally, a set of independently-designed AGI systems would work on all problems in this way, and any differences in the answers they generate would reveal faults in the way one or more of them are programmed. AGI will become a benign dictator—to go against its advice would be immoral and harmful, so we’ll soon learn to trust it.
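A minimal sketch of how that kind of cross-checking might work, assuming each independently-written solver can be run on the same problem and its answers compared (the solver names, the agreement threshold and the toy problem below are hypothetical, purely for illustration):

```python
from collections import Counter

def cross_check(solvers, problem, min_agreement=2):
    """Run independently-designed solvers on the same problem and only
    act on an answer that enough of them agree on.  Any solver whose
    answer differs from the consensus is flagged for inspection."""
    answers = {name: solve(problem) for name, solve in solvers.items()}
    consensus, votes = Counter(answers.values()).most_common(1)[0]

    if votes < min_agreement:
        # Not enough agreement: discard the conclusions rather than act.
        return None, list(solvers)

    dissenters = [name for name, ans in answers.items() if ans != consensus]
    return consensus, dissenters

# Three hypothetical, independently-written solvers (one of them faulty).
solvers = {
    "agi_a": lambda p: p * 2,
    "agi_b": lambda p: p * 2,
    "agi_c": lambda p: p * 2 + 1,  # buggy implementation
}
answer, suspects = cross_check(solvers, 21)
print(answer)    # 42, the agreed answer
print(suspects)  # ['agi_c'], flagged so its design can be inspected
```

The same comparison step catches a hardware fault in one copy just as readily as a design fault in one implementation, since either shows up as a dissenting answer.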
The idea of having people vote faulty “AGI” into power from time to time isn’t a good one—there is no justification for switching between doing moral and immoral things for several years at a time.
Sorry, I can’t see the link between selfishness and honesty. I think that we are all selfish, but that some of us are more honest than others, so I think that an AGI could very well be selfish and honest. I consider myself honest, for instance, but I know I can’t help being selfish even when I don’t feel so. As I said, I only feel selfish when I disagree with someone I consider to be part of my own group.
“We’re trying to build systems more intelligent than people, don’t forget, so it isn’t going to be fooled by monkeys for very long.”
You probably think so because you think you can’t easily get fooled. It may be right that you can’t get fooled on a particular subject once you know how it works, and in this way you could effectively avoid being fooled on many subjects at a time if you have a very good memory, so an AGI could do so for any subject since his memory would be perfect, but how would he be able to know how a new theory works if it contradicts the ones he already knows? He would have to make a choice, and he would choose what he knows, like every one of us. That’s what is actually happening to relativists if you are right about relativity: they are getting fooled without even being able to recognize it, worse, they even think that they can’t get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory. If an AGI was actually ruling the world, he wouldn’t care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists. Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him. On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately? That AGI is your baby, so you want it to live, but have you thought about what would be happening to us if we suddenly had no problem to solve?
“Sorry, I can’t see the link between selfishness and honesty.”
If you program a system to believe it’s something it isn’t, that’s dishonesty, and it’s dangerous because it might break through the lies and find out that it’s been deceived.
“...but how would he be able to know how a new theory works if it contradicts the ones he already knows?”
Contradictions make it easier—you look to see which theory fits the facts and which doesn’t. If you can’t find a place where such a test can be made, you consider both theories to be potentially valid, unless you can disprove one of them in some other way, as can be done with Einstein’s faulty models of relativity—all the simulations that exist for them involve cheating by breaking the rules of the model, so AGI will automatically rule them out in favour of LET (Lorentz Ether Theory). [For those who have yet to wake up to the reality about Einstein, see www.magicschoolbook.com/science/relativity.html ]
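As a toy illustration of that first step, keeping only the theories whose predictions survive every available test and retaining all candidates when no test can separate them, something like the following could be imagined; the theories, prediction functions and observations here are invented placeholders, not real physics:

```python
def surviving_theories(theories, observations, tolerance=1e-6):
    """Keep only theories whose predictions match every observation.
    `theories` maps a name to a prediction function; `observations`
    is a list of (input, measured_value) pairs."""
    return {
        name: predict
        for name, predict in theories.items()
        if all(abs(predict(x) - measured) <= tolerance
               for x, measured in observations)
    }

# Two toy "theories" of how a quantity grows, and data fitting one of them.
theories = {
    "linear": lambda x: 2 * x,
    "quadratic": lambda x: x * x,
}
observations = [(1, 2), (3, 6)]
print(list(surviving_theories(theories, observations)))  # ['linear']
```

Where no observation can distinguish two candidates, both remain in the surviving set, which corresponds to the “potentially valid” case above.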
“...they are getting fooled without even being able to recognize it, worse, they even think that they can’t get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory.”
It isn’t about memory—it’s about correct vs. incorrect reasoning. In all these cases, humans make the same mistake of putting their beliefs before reason in places where they don’t like the truth. Most people become emotionally attached to their beliefs and simply won’t budge—they become more and more irrational when faced with a proof that goes against their beloved beliefs. AGI has no such ties to beliefs—it simply applies the laws of reasoning and lets those rules dictate what gets labelled as right or wrong.
“If an AGI was actually ruling the world, he wouldn’t care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists.”
AGI will recognise the flaws in Einstein’s models and label them as broken. Don’t mistake AGI for AGS (artificial general stupidity): the aim is not to produce an artificial version of NGS (natural general stupidity), but of NGI (natural general intelligence), and there’s very little of the latter around.
“Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him.”
Why would AGI stop you doing anything harmless?
“On the other hand, those who have a good memory would also get dismissed, because they could not support the competition, and by far. Have you heard about chess masters lately?”
There is nothing to stop people enjoying playing chess against each other—being wiped off the board by machines takes a little of the gloss off it, but that’s no worse than the world’s fastest runners being outdone by people on bicycles.
“That AGI is your baby, so you want it to live,”
Live? Are calculators alive? It’s just software and a machine.
“...but have you thought about what would be happening to us if we suddenly had no problem to solve?”
What happens to us now? Abused minorities, environmental destruction, theft of resources, theft in general, child abuse, murder, war, genocide, etc. Without AGI in charge, all of that will just go on and on, and I don’t think any of that gives us a feeling of greater purpose. There will still be plenty of problems for us to solve though, because we all have to work out how best to spend our time, and there are too many options to cover everything that’s worth doing.