First off, that S was supposed to be S̄, sorry.
The agent might commit to “only updating on those things accepted by program p”, even when it still doesn’t have the complete infinite list of exactly which things p updates on (in fact, this is always the case, since we can’t hold an infinite list in our heads). At the time of committing, it will know that p updates on certain things and doesn’t update on others… and it will be uncertain about exactly what p does in all remaining situations. But that’s okay; that’s what we do all the time: we decide on an endorsed deliberation mechanism based on its structural properties, without yet being completely sure of what it does (otherwise, we wouldn’t need the deliberation). It does, though, advise against committing while still too ignorant.
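To make the point concrete, here is a minimal sketch (all names hypothetical, not from the original discussion) of what “committing to p” looks like computationally: the agent stores the program p itself and evaluates it lazily on each observation, so it never needs the infinite extension of “everything p accepts”.

```python
# Hypothetical illustration: committing to an update-filter program p
# by reference, rather than by enumerating its (infinite) behavior.

def p(observation: str) -> bool:
    # A stand-in update filter: accept only observations tagged as
    # calibration data; refuse to update on anything else.
    return observation.startswith("calibration:")

class Agent:
    def __init__(self, update_filter):
        # The commitment is to the program, not to a lookup table:
        # we store p itself and consult it case by case.
        self.update_filter = update_filter
        self.knowledge = []

    def observe(self, observation: str):
        # The commitment is evaluated lazily, one situation at a time,
        # even though the agent never surveyed p's full domain.
        if self.update_filter(observation):
            self.knowledge.append(observation)

agent = Agent(p)
agent.observe("calibration: sensor drift is 0.02")   # accepted by p
agent.observe("adversary: claims you should defect")  # filtered out
print(agent.knowledge)  # only the calibration observation survives
```

The design choice mirrors the prose: endorsing p for its structural properties (here, “only calibration data gets through”) is compatible with uncertainty about what p returns on inputs the agent has never considered.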