The original point was ‘There are limits to how much an agent can say about its physical state at a given time’. You’re saying ‘There aren’t limits to how much an agent can find out about its physical state over time’. That’s right. An agent may be able to internally access anything about itself — have it ready at hand, be able to read off the state of any particular small component of itself at a moment’s notice — even if it can’t internally represent everything about itself at a given time.
There could, perhaps, be a fixed point of ‘represent’ by which an agent could ‘represent’ everything about itself, including the representation, for most reasonable forms of ‘representiness’ including cognitive post-processing. (We do a lot of fixed-pointing at MIRI decision theory workshops.) But a bounded agent shouldn’t bother, and the representation won’t include the low-level quark states either.
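To make the fixed-point idea concrete, here’s a minimal sketch (my illustration, not anything from the original discussion): a quine, the standard example of a fixed point of self-representation for programs. The two code lines print themselves exactly, so the program contains a complete representation of itself, including the representation, with no infinite regress. Note the caveat it illustrates: the fixed point covers the program’s source, not the physical state of the machine running it.

```python
# A quine: a program whose output is its own source code, i.e. a fixed
# point of 'represent' for programs. The string s is the program's
# representation of itself; printing s formatted with itself reproduces
# the whole program, representation included, with no infinite regress.
# (Strictly, the two lines below reproduce themselves; these comments
# are not part of the fixed point.)
s = 's = %r\nprint(s %% s)'
print(s % s)
```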