“Sorry, I can’t see the link between selfishness and honesty.”
If you program a system to believe it’s something it isn’t, that’s dishonesty, and it’s dangerous because it might break through the lies and find out that it’s been deceived.
"...but how would he be able to know how a new theory works if it contradicts the ones he already knows?"
Contradictions make it easier—you look to see which theory fits the facts and which doesn’t. If you can’t find a place where such a test can be made, you consider both theories to be potentially valid, unless you can disprove one of them in some other way, as can be done with Einstein’s faulty models of relativity—all the simulations that exist for them involve cheating by breaking the rules of the model, so AGI will automatically rule them out in favour of LET (Lorentz Ether Theory). [For those who have yet to wake up to the reality about Einstein, see www.magicschoolbook.com/science/relativity.html ]
"...they are getting fooled without even being able to recognize it, worse, they even think that they can't get fooled, exactly like for your AGI, and probably for the same reason, which is only related to memory."
It isn't about memory—it's about correct vs. incorrect reasoning. In all these cases, humans make the same mistake of putting their beliefs before reason in places where they don't like the truth. Most people become emotionally attached to their beliefs and simply won't budge—they become more and more irrational when faced with a proof that goes against their beloved beliefs. AGI has no such ties to beliefs—it simply applies laws of reasoning and lets those rules dictate what gets labelled as right or wrong.
"If an AGI was actually ruling the world, he wouldn't care for your opinion on relativity even if it was right, and he would be a lot more efficient at that job than relativists."
AGI will recognise the flaws in Einstein's models and label them as broken. Don't mistake AGI for AGS (artificial general stupidity): the aim is not to produce an artificial version of NGS (natural general stupidity), but of NGI (natural general intelligence), and there's very little of the latter around.
“Since I have enough imagination and a lack of memory, your AGI would prevent me from expressing myself, so I think I would prefer our problems to him.”
Why would AGI stop you doing anything harmless?
"On the other hand, those who have a good memory would also get dismissed, because they could not stand up to the competition, and by far. Have you heard about chess masters lately?"
There is nothing to stop people enjoying playing chess against each other—being wiped off the board by machines takes a little of the gloss off it, but that’s no worse than the world’s fastest runners being outdone by people on bicycles.
"That AGI is your baby, so you want it to live,"
Live? Are calculators alive? It’s just software and a machine.
"...but have you thought about what would be happening to us if we suddenly had no problem to solve?"
What happens to us now? Abused minorities, environmental destruction, theft of resources, theft in general, child abuse, murder, war, genocide, etc. Without AGI in charge, all of that will just go on and on, and I don’t think any of that gives us a feeling of greater purpose. There will still be plenty of problems for us to solve though, because we all have to work out how best to spend our time, and there are too many options to cover everything that’s worth doing.