Thanks! Nice paper overall.

Minor nitpicks up to section 3.1:

Should be “criterion.”

> Since we are looking to resolve a mainly empirical question – what systems of motivations could we actually code into a putative AI – this theoretical disagreement is highly problematic.

I’m not sure what you mean by “problematic,” and it seems unclear—are you just trying to say “useless” nicely? If so, I’d construct this sentence more positively—“we can settle the empirical question without needing to resolve the theoretical disagreement.”

> to accumulate more power, to become more intelligence and to be able to cooperate with other agents

Should be “intelligent.”

> , the rational agent will then attempt to maximise it, using the approaches in all cases

Should be “same approaches,” I assume.

Thanks for that! I won’t correct them here, though—I’ll wait to see what the final published version is, and update then.