Brain dump of a quick idea:
A sufficiently complex bridge law might say that the agent is actually a rock which, through some bizarre, arbitrary encoding, encodes a computation[1]. Meanwhile the actual agent is somewhere else. Hopefully the agent has an adequate Occamian prior and never assigns this hypothesis any weight, because of the high complexity of the encoding.
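As a toy illustration of why such a prior buries the rock hypothesis (a sketch of my own, with made-up description lengths, not anything from Drescher), consider a Solomonoff-style prior that penalizes a bridge law exponentially in the length of its description:

```python
# Toy sketch: an Occamian (Solomonoff-style) prior over bridge laws that
# assigns weight 2^(-K) to a bridge law whose description is K bits long.
# The bit counts below are invented for illustration only.

def complexity_prior(description_length_bits: int) -> float:
    """Un-normalized Occamian weight: shorter bridge laws get more weight."""
    return 2.0 ** (-description_length_bits)

robot_bridge_law_bits = 200        # hypothetical: a short, "natural" mapping
rock_bridge_law_bits = 1_000_000   # hypothetical: a huge lookup-table encoding

print(complexity_prior(robot_bridge_law_bits))   # ~6e-61: tiny, but dominant
print(complexity_prior(rock_bridge_law_bits))    # underflows to 0.0 in floats
```

Normalizing over the two, essentially all of the probability mass lands on the simpler bridge law.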
In idea-space, though, there is a computation which is encoded by a rock using a complex, arbitrary encoding, and which, by virtue of having a weird prior, concludes that it actually is that rock, and to whom the breaking of the rock would mean death. We usually regard this computation as irrelevant for moral purposes: only computations corresponding to “natural” interpretations of physical phenomena count. But this distinction between natural and arbitrary criteria of interpretation seems, well, arbitrary.
We regard a person’s body as “executing” the computation that is a person who thinks she is in that body. But we do not regard the rock as actually “executing” the computation that is a weird agent which thinks it is the computation encoded by the rock (through some arbitrary encoding).
Why?
The pragmatic reason is obvious: you can’t do anything to help the rock-computation in any way, and whatever you do, you’d be lowering utility for some other ideal computation.
But maybe the kind of reasoning the rock-computation has to perform in order to keep seeing itself as the rock is relevant.
A rock-as-computation hypothesis (I am this rock in this universe = my phenomenological experiences correspond to the states of atoms at several points in this rock, as translated by this [insert very long bridge law, full of specifics] bridge law) is doomed to fail within the next few steps of the computation. Because the bridge law is so ad hoc, it won’t correctly predict the next phenomena perceived by the computation (in the ideal or real world where it actually executes). So if the rock-computation does induction at all, it will have to change the bridge law and give up on being that rock.
In other words, if we built a robot with a prior that privileges the bridge-law hypothesis that it’s a computation encoded in a rock through some bizarre encoding, it would have to abandon that hypothesis very soon. And, as phenomenological data came in, it would approach the correct hypothesis that it’s the robot. Unless it’s doing some kind of privileged-hypothesis anti-induction, where it keeps adopting increasingly complex bridge laws in order to keep believing it is that one rock.
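Here is a minimal Bayesian sketch of that dynamic (my own toy model; the likelihood numbers are assumptions, not anything from the argument above): the robot hypothesis keeps predicting the incoming phenomenological data, while the ad hoc rock bridge law only fit past data by construction and fails on each new observation.

```python
# Toy model: two bridge-law hypotheses updated on a stream of observations.
# Likelihood values are illustrative assumptions, not measured quantities.

def normalize(weights):
    total = sum(weights.values())
    return {h: w / total for h, w in weights.items()}

posterior = {"robot": 0.5, "rock (ad hoc bridge law)": 0.5}
likelihood = {
    "robot": 0.9,                       # predicts the next phenomena well
    "rock (ad hoc bridge law)": 0.01,   # next-step predictions keep failing
}

for step in range(1, 6):
    posterior = normalize({h: p * likelihood[h] for h, p in posterior.items()})
    print(step, posterior)

# After a handful of observations the rock hypothesis is negligible, unless the
# agent keeps re-patching the bridge law (the anti-induction move), which an
# Occamian prior would penalize more heavily at every patch.
```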
So, proposal: a substrate is regarded as embodying a computation-agent if that computation-agent is one that, by doing induction in the same sense we do to find out about ourselves, would eventually arrive at the correct bridge law that it is being executed in said substrate.
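One way to read that criterion as pseudocode (my own rough formalization; every helper name here is hypothetical, not an existing API):

```python
# Rough sketch of the proposed criterion. `observe` yields the agent's
# phenomenological data as produced by the substrate, and `induce` is any
# ordinary Occamian induction procedure; all names are hypothetical.

def embodies(substrate, agent, observe, induce, steps=10_000) -> bool:
    """True iff induction over the agent's data converges on a bridge law
    that locates the agent in this substrate."""
    belief = induce.initial_belief()
    for _ in range(steps):
        belief = induce.update(belief, observe(substrate, agent))
    return belief.best_bridge_law().locates(agent, substrate)
```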
--
[1] The rock example is from Gary Drescher, Good and Real, 2006, p. 55.