And the same applies to any realistic AI. AIs won’t be rational or irrational in a binary sense; they will have certain computational resources, which they apply to certain problem domains with certain levels of optimisation. They’ll be able to solve some relatively simple problems perfectly, but there remains a whole class of problems that cannot be brute-forced, and every imperfect AI design will be imperfect in its own way—it will have its own heuristics and therefore its own biases. This spells trouble for the general project of trying to predict ASI behaviour on the assumption that it will be rational.