“looking for reflective equilibria of your current inconsistent and unknowledgeable self; something along the lines of ‘What would you ask me to do if you knew what I know and thought as fast as I do?’”
We’re sufficiently more intelligent than monkeys to do that reasoning… so humanity’s goal (as the advanced intelligence that monkeys, in effect, created a few million years ago to reach the Singularity) should be to use all the knowledge we’ve gained to tile the universe with bananas, forests, etc.
We don’t have the right to say, “if monkeys were more intelligent and consistent, they would think like us”: from a monkey’s point of view, we too are just a random product of evolution. (Tile the world with ugly concrete buildings? Uhhh...)
So I think that to preserve our humanity in the process, we should be the ones who gradually become more and more intelligent (and decide what goals to follow next). Humans are complicated, so to simulate us in a Friendly AI we’d need comparably complex systems… and those are probably chaotic, too. Isn’t it… simply… impossible? (Not in the sense that “we can’t make it”, but “we can prove nobody can”...)