Your last paragraph is really interesting and not something I'd thought much about before. In practice, is it likely to be unbounded? In a typical computer system, aren't number formats typically bounded? And if so, would we expect an AI system to be using bounded numbers even if the programmers forgot to explicitly bound the reward in the code?
But aren't we explicitly talking about the AI changing its architecture to get more reward? So if it wants to optimize that number, the most important thing to do would be to get rid of that arbitrary limit.
Yeah, that's what I'd like to know: would an AI built on a number format that has a default maximum pursue numbers higher than that maximum, or would it be "fulfilled" just by getting its reward number as high as the number format it's using allows?
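For what it's worth, here's a minimal sketch of what that "default maximum" looks like in practice, assuming the reward counter is held in IEEE float32 (the variable names are illustrative, not from any real system). Near the ceiling, small additions vanish into rounding, and a large enough addition overflows to infinity rather than to a bigger number:

```python
import numpy as np

# Sketch of a reward counter held in a bounded format (IEEE float32
# here; names are hypothetical, not from any real AI system).
fmt_max = np.finfo(np.float32).max      # ~3.4e38, the format's ceiling
reward = np.float32(fmt_max)            # counter already at the maximum

with np.errstate(over="ignore"):
    saturated = reward + np.float32(1e30)   # small relative to the ceiling
    overflowed = reward + np.float32(1e32)  # big enough to push past it

print(saturated == reward)   # True: the extra reward is lost to rounding
print(np.isinf(overflowed))  # True: past the ceiling there is only inf
```

So in a literal sense the format itself can't represent "more reward than the maximum"; the question is whether the system would treat that ceiling as satisfaction or as an obstacle to remove.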
To me, this seems highly dependent on the ontology.