...and the construction turns out to be less interesting than I suspected. Something like this is easy to carry out: simply replace the agent A with another agent that can’t be understood in S but is equivalent to A (according to a system stronger than S). As a tool for understanding decision problems, this was intended to solve the problem of parsing the world in terms of A: figuring out how the world depends on A and where A is located in it. But if we can find all instances of A in the world in order to perform such surgery on them, we’ve already solved that problem!
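To make that concrete, here is a minimal sketch of one way such a replacement could go, using a Gödelian wrapper; the consistency statement Con(S), the default action a_0, and the input x_0 are my illustrative choices, not anything fixed by the setup above.

% Define A' to agree with A whenever S is consistent, and to output a
% fixed default action a_0 otherwise. Pick a_0 and an input x_0 so that
% S itself can verify A(x_0) != a_0 (e.g. a_0 = A(x_0) + 1 for
% numerically coded actions).
\[
A'(x) \;=\;
\begin{cases}
  A(x) & \text{if } \mathrm{Con}(S),\\[2pt]
  a_0  & \text{otherwise.}
\end{cases}
\]

The stronger system T = S + Con(S) proves that A' is equivalent to A, since it proves the second case never fires. But S cannot prove the equivalence: if it did, then together with the definition (which gives ¬Con(S) → A'(x_0) = a_0) and the verifiable fact A(x_0) ≠ a_0, it would prove Con(S), contradicting the second incompleteness theorem (assuming S is consistent and strong enough). So A' is exactly the kind of agent that S cannot understand, but that a stronger system can see is just A.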
Perhaps A can decide to make itself incomprehensible to itself (to any given S, rather), thus performing the transformation without external surgery, a formalization of free will achieved by an act of then-mysterious free will? This could still be done. But it’s not clear whether the same can be done “from the outside”, where we don’t have the power to make A transform itself so as to make the world’s dependence on its actions clear.