If our values are threatened by super intelligence, does that mean that we should build the super intelligence with an ad hoc human value module, or that we should abandon our values?
Also, there are some human values which seem likely to me to be pretty universal to intelligence. If the ability to get bored is correlated with the ability to be creative (which I think it is), and super intelligences (whatever else they are) must be capable of creative action by virtue of being super intelligences, then they're likely to also care about diversity. In fact, I have a hard time imagining a conscious (as conscious as a human) being that is capable of repeating an action indefinitely without that action becoming less rewarding. Another human value which I have thought might be inherent to intelligent beings is an appreciation for complicated things, e.g., works of music, mathematical structures, biological structures, the universe as told by Sagan, etc. Assuming that an intelligence will always gain more pleasure from accomplishing a more difficult task than a less difficult one, we shouldn't be surprised if singularities show a tendency to preserve complicated objects.
Now a self-modifying super intelligence may well accidentally make itself unconscious, after which point I have no clue what values (if "values" even applies) it might take up, but as long as it remains creative and keeps liking complicated things, we shouldn't interfere with its doings.
I may just be biased in that I happen to really like both the human ability to get bored (a.k.a. be creative) and the human tendency to like complicated things; these values likely have no special value-ness in my view, but I don't see why we would call anything "conscious" that wasn't gratified by solving hard problems or capable of creativity. Then again, I also don't see why we should consider something unintelligent simply because it is not conscious. Perhaps we should be doing work in making sure the singularity stays conscious, instead of doing work in making the singularity friendly.
(please give rebuttals community)