Section of an interesting talk by Anna Salamon relating to this. It makes the point that if the AI's ability to improve its model of fundamental physics is not linear in the fraction of the Universe it controls, such an AI would be at least somewhat risk-averse (with respect to gambles that give it different proportions of our Universe).
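The underlying mechanism is Jensen's inequality: if the payoff from controlling resources grows sublinearly (i.e. is concave), then a sure middling share beats a fair gamble between extremes. A minimal sketch, where the `capability` function is a purely illustrative stand-in for "ability to improve its model of physics":

```python
# Hypothetical concave "capability" function: how well the agent can model
# fundamental physics as a function of the fraction x of the Universe it
# controls. Sublinear (concave) growth is the assumption under discussion.
def capability(x: float) -> float:
    return x ** 0.5  # sqrt is concave: doubling resources less than doubles capability

# A 50/50 gamble between controlling nothing and controlling everything...
expected_capability_of_gamble = 0.5 * capability(0.0) + 0.5 * capability(1.0)

# ...versus controlling half the Universe for certain.
capability_of_sure_half = capability(0.5)

print(expected_capability_of_gamble)  # 0.5
print(capability_of_sure_half)        # ~0.707

# By Jensen's inequality the sure option has higher expected payoff,
# so an expected-payoff maximiser with this concave function is risk-averse.
assert capability_of_sure_half > expected_capability_of_gamble
```

If capability were exactly linear in `x`, the two options would tie and the agent would be risk-neutral over such gambles; any strict concavity tips it toward the sure share.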