Possible model of semi-dualism:
The agent could have in its map that it is a computer box subject to physical laws, but only above the level where information processing occurs. That is, it could know that its memory lives in a RAM stick, which is subject to breaking and melting, but have no predictions about the individual bits of that stick. It could know, by a TDT-type analysis, that the box is itself. Unfortunately, this model isn't enough for hardware improvements: the agent would know that adding RAM is physically possible, but it wouldn't know the information-theoretic implications, i.e. how its memory protocols would react to the added RAM.
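As a toy illustration of that asymmetry (a minimal sketch; the class and method names are hypothetical, not from the original), a semi-dualist self-model might answer coarse physical queries about its own hardware while returning nothing for queries below the level at which its information processing runs:

```python
# Illustrative sketch only: a self-model with coarse physical facts about its hardware,
# but no predictive power below the information-processing level.
from typing import Optional


class SemiDualistSelfModel:
    # Coarse physical facts the agent does have in its map.
    physical_facts = {
        "memory_substrate": "RAM stick",
        "failure_modes": ["breaking", "melting"],
        "box_is_self": True,  # concluded via a TDT-type analysis
    }

    def predict_physical_event(self, event: str) -> Optional[bool]:
        """Events above the abstraction floor: it knows these can damage it."""
        return True if event in self.physical_facts["failure_modes"] else None

    def predict_bit(self, address: int) -> Optional[int]:
        """Below the abstraction floor: no predictions about individual bits of its RAM."""
        return None

    def predict_upgrade(self, change: str) -> str:
        """Knows an upgrade is physically possible, but not its effect on its own memory protocols."""
        if change == "add RAM":
            return "physically possible; information-theoretic effect unknown"
        return "unknown"
```

On this sketch, the gap the paragraph points at is exactly the last method: the model can assert possibility without being able to simulate the consequences for its own cognition.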
Now that I think about it, that's kind of what humans do. Except that, until we reach the point where we can learn we are a thing that can be damaged, we need parents and pain mechanisms to keep us safe.