Your division of predictive ability into intelligence and wisdom is very artificial. People are not magic, they're just chaotic. They are not fundamentally different from other complex and chaotic systems. There is no reason to expect that raising general predictive ability wouldn't help with predicting them.
I agree that raising general predictive ability would also tend to increase wisdom. My main point, which I probably didn't highlight enough, is that wisdom is bottlenecked on data (and perhaps requires more abstraction and abduction than other kinds of learning) to a greater degree than the other knowledge we tend to collect, because of the underlying complexity of the thing we are trying to predict: human behavior.
If the superintelligent agent lacked data, it would realize this and go collect some. The situation is only dangerous if the agent decides to take drastic action without evaluating its own accuracy. But if the agent is too stupid to evaluate its own accuracy, it's probably too stupid to implement the drastic action in the first place. And if the agent is able to evaluate itself but ignores the result, that's more a problem of evil than a lack of wisdom.