Intelligence is poorly-defined, for a start, artificial intelligence doubly so—think about the number of times we’ve redefined “AI” after achieving what we previously called “AI”.
“Recursive self-improvement” is also poorly defined; as an example, we have recursively self-improving AIs right now, in the form of self-training neural nets.
Superintelligence is even less well-defined, which is why I prefer the term “godhood”, which I regard as more honest in its vagueness. It may also be illusory; most of us on Less Wrong are here in part because of boredom, because intelligence isn’t nearly as applicable in daily life as we’d need it to be to stay entertained; does intelligence have diminishing returns?
We can tell that some people are smarter than other people, but we’re not even certain what that means, except that they score better on whatever measurement we happen to measure them by.
Intelligence, Artificial Intelligence and Recursive Self-improvement are likely poorly defined. But since we can point to concrete examples of all three, this is a problem in the map, not the territory. These things exist, and different versions of them will exist in the future.
Superintelligences do not exist, and it is an open question if they ever will. Bostrom defines superintelligences as “an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.” While this definition has a lot of fuzzy edges, it is conceivable that we could one day point to a specific intellect, and confidently say that it is superintelligent. I feel that this too is a problem in the map, not the territory.
I was wrong to assume that you meant superintelligence when you wrote godhood, and I hope that you will forgive me for sticking with “superintelligence” for now.