A related factor is curiosity. As I understand it, reinforcement learning agents perform much better when gifted with curiosity (or when they develop it themselves). Seeking novel information is extremely helpful for most goals (though it can lead to "TV addiction").
I find it plausible that ASI will be curious, and that both humanity and the biosphere, which are the results of billions of years of an enormous computation, will stimulate ASI’s curiosity.
But its curiosity may not last for centuries, or even years. Moreover, satisfying that curiosity could involve dissecting living humans, or worse.