I may be missing your point. As you’ve written about before, things go haywire when the agent knows too much about its own decisions in advance. Hence hacks like “playing chicken with the universe”.
So, the agent can’t know too much about its own decisions in advance. But is this an example of indexical uncertainty? Or is it (as it seems to me) an example of a kind of logical uncertainty that an agent apparently needs to have? Apparently, an agent needs to be sufficiently uncertain, or to have uncertainty of some particular kind, about the output of the algorithm that the agent is. But uncertainty about the output of an algorithm requires only logical uncertainty.
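To make the worry concrete, here is a toy sketch (Python, every name hypothetical, and not anyone's actual formalism) of the “chicken rule” as I understand it: the proof search is stubbed out, and the only point is that the rule diagonalizes against any proof of the agent's own output, which is one way of forcing the agent to retain logical uncertainty about the output of the algorithm it is.

```python
from typing import Callable

Action = str

def chicken_agent(
    actions: list[Action],                                # assume at least two actions
    utility_of: Callable[[Action], float],                # toy stand-in for evaluating outcomes
    provable_self_prediction: Callable[[Action], bool],   # stub for "can I prove my output is a?"
) -> Action:
    # Chicken rule: if the agent could prove "my output is a", it outputs
    # something other than a, refuting any such proof in advance.
    for a in actions:
        if provable_self_prediction(a):
            return next(b for b in actions if b != a)
    # Otherwise the agent remains (logically) uncertain about its own output
    # and can compare actions on their merits.
    return max(actions, key=utility_of)

if __name__ == "__main__":
    # With a sound proof system the chicken branch is never actually taken;
    # its role is to rule out proofs of the agent's own decision in advance.
    print(chicken_agent(
        ["one-box", "two-box"],
        utility_of=lambda a: 1_000_000 if a == "one-box" else 1_000,
        provable_self_prediction=lambda a: False,
    ))
```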