I agree that raising general predictive ability would also tend to increase wisdom. My main point, which I probably didn't highlight sufficiently, is that wisdom is bottlenecked on data more so than other kinds of knowledge we collect, because of the underlying complexity of the thing we are trying to predict (human behavior). It also may require more abstraction and abduction than other learning.
If a superintelligent agent lacked data, it would realize this and go collect some. The situation is only dangerous if the agent takes drastic action without evaluating its own accuracy. But if the agent is too stupid to evaluate its own accuracy, it's probably too stupid to implement the drastic action in the first place. And if the agent can evaluate itself but ignores the result, that's a problem of evil rather than a lack of wisdom.