Any mind that I can model well enough to predict accurately ceases, at that point, to be an agent to me.
If I can predict what you are going to do with 100% certainty, then it doesn’t matter what internal processes lead you to take that action. I don’t need to see into the black box to predict the action of the machine.
People I know well retain their agenthood because they are complex enough to think in ways I do not.
For these reasons, I rarely attempt to model the mental processes of minds I consider stronger than mine (in the rational sense). Asking myself what a powerful rationalist would do is not a useful heuristic, because my model of a strong rationalist is not, in itself, stronger than my own understanding of rationalism.