“random utility-maximizer” is pretty ambiguous; if you imagine the space of all possible utility functions over action-observation histories and put a uniform distribution over them (suppose the set of such utility functions is finite, so a uniform distribution is well-defined), then the answer is low.
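As a minimal sketch of the finiteness point (the tiny 2-action, 2-observation, horizon-2 setting and all names here are my own illustration, not anything from the original question): with finitely many histories and a finite value set, "a uniform distribution over utility functions" is straightforwardly samplable.

```python
import itertools
import random

# Hypothetical tiny setting: 2 actions, 2 observations, horizon 2,
# so an action-observation history is a tuple like (a0, o0, a1, o1).
actions = ["a0", "a1"]
observations = ["o0", "o1"]
histories = list(itertools.product(actions, observations, actions, observations))

# Restricting utilities to a finite value set makes the space of utility
# functions finite, so "uniform over utility functions" is well-defined:
# draw each history's utility independently and uniformly.
values = [0.0, 0.5, 1.0]
random.seed(0)
utility = {h: random.choice(values) for h in histories}

# A maximizer of this random utility prefers whichever first action has
# higher utility in expectation (here, over uniform observations).
def expected_utility(first_action):
    relevant = [u for h, u in utility.items() if h[0] == first_action]
    return sum(relevant) / len(relevant)

best = max(actions, key=expected_utility)
```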
Heh, looking at my comment it turns out I said roughly the same thing 3 years ago.
I have no idea why I responded ‘low’ to 2. Does anybody think that’s reasonable and fits in with what I wrote here, or did I just mean high?