Both AIXI and the Machine Super Intelligence formalism use cardinal utilities, or in the latter case rational-number approximations to cardinal utilities (not sure if economists have a separate label for that), for their reward functions. I suspect this limits their applicability to humans and other organisms.
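The distinction between a real-valued cardinal utility and a rational-number approximation to it can be illustrated with a minimal sketch (the values and the tolerance here are illustrative assumptions, not drawn from either formalism):

```python
from fractions import Fraction

# Sketch: a cardinal utility may be any real number, but a computable
# agent's reward signal must be representable -- e.g. as a rational.
# The example value (sqrt 2) is purely illustrative.

true_utility = 2 ** 0.5  # an irrational "cardinal" utility

# Approximate it by a rational with a bounded denominator.
approx = Fraction(true_utility).limit_denominator(1000)

error = abs(float(approx) - true_utility)
```

For most practical reward scales the approximation error is negligible, which is one reason the restriction to rationals is usually treated as harmless.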
FWIW, I don’t see that as much of a problem. I’m more concerned about humans having a multitude of pain sensors (multiple reward channels), and a big mountain of a-priori knowledge about which actions are associated with which types of pain—though that doesn’t exactly break the utility-based models either.
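The point that multiple reward channels need not break a utility-based model can be sketched as follows: if the channels can be collapsed into a single scalar, the standard framework still applies. This is a hypothetical illustration; the channel names, weights, and the weighted-sum aggregation rule are all assumptions for the example, not claims about how organisms actually combine pain signals:

```python
# Hypothetical sketch: several "pain" channels collapsed into one scalar
# reward, as a single-reward utility model would require.

def aggregate_reward(channels, weights):
    """Combine per-channel reward values into one scalar via a weighted sum."""
    return sum(weights[name] * value for name, value in channels.items())

# Illustrative channel readings (negative = painful) and weights.
channels = {"heat": -0.8, "pressure": -0.2, "hunger": -0.5}
weights = {"heat": 2.0, "pressure": 1.0, "hunger": 0.5}

scalar_reward = aggregate_reward(channels, weights)
```

Whether a fixed weighting like this captures real organisms is exactly the open question; the sketch only shows that multiple channels are formally compatible with a single reward function.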
But the whole concept of “rationality” can probably usefully be viewed as a memeplex. And rational reflection leading to its rejection, while not a priori impossible, seems unlikely.
Sure, but “rationality” and “values” are pretty orthogonal ideas. You can use rational thinking to pursue practically any set of values. I suppose if your values are crazy ones, a dose of rationality might have an effect.
Lately, though, the rate of evolution of the memes may be leaving the genes in the dust.
Yes indeed. That’s been going on since the stone age, and it has left its mark on human nature.
Pretty much, but I think not totally. But we’ve gone far enough afield already. I’ll note this as a possible topic for a future discussion post.