Optimising for one thing is almost automatically optimising against other things,
Yes.
but AI theorists need a concept of general intelligence: optimising across the board.
No. There is no optimizing across the board, as you just stated. Optimizing by one standard is optimizing against the inverse of that standard.
But AIXI can optimize by any standard we please, by setting its reward function to reflect that standard. That’s what we mean by an AI being “general” instead of “domain specific” (or “applied”, “narrow”, or “weak” AI). It can learn to optimize any goal. I would be willing to call an AI “general” if it’s at least as general as humans are.
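To make that parameterization concrete, here is a minimal sketch in Python. All the names are hypothetical, and real AIXI is uncomputable (it runs an expectimax over all computable environments weighted by a Solomonoff prior), so this only illustrates the interface: one fixed decision procedure, where the reward function is the sole goal-specific component we swap out to retarget the agent.

```python
from typing import Callable, Sequence

Action = str
Percept = str
History = tuple  # sequence of (action, percept) pairs
RewardFn = Callable[[History], float]  # the only goal-specific part

def pick_action(history: History,
                actions: Sequence[Action],
                simulate: Callable[[History, Action], Percept],
                reward: RewardFn) -> Action:
    """Greedy one-step stand-in for AIXI's planning step: choose the
    action whose simulated outcome the reward function rates highest."""
    return max(actions,
               key=lambda a: reward(history + ((a, simulate(history, a)),)))

# Retargeting the same procedure to a different goal means changing
# nothing except the reward function:
count_wins: RewardFn = lambda h: sum(p == "win" for _, p in h)
count_clips: RewardFn = lambda h: sum(p == "clip" for _, p in h)

# Toy environment: one action yields a "win" percept, the other a "clip".
simulate = lambda h, a: "win" if a == "careful" else "clip"
print(pick_action((), ["careful", "greedy"], simulate, count_wins))   # careful
print(pick_action((), ["careful", "greedy"], simulate, count_clips))  # greedy
```

The point of the sketch is the last four lines: the same planner pursues entirely different goals depending on which reward function it is handed, which is the sense of "general" at issue here.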