Here comes the Reasoning Inquisition! (Nobody expects the Reasoning Inquisition.)
As the defendant admits, a sufficiently leveled-up paperclipper can model lower-complexity agents with a negligible margin of error.
That means that we can define a subroutine within the paperclipper which is functionally isomorphic to that agent.
If the agent-to-be-modelled is experiencing pain and pleasure, then by the defendant’s own rejection of the likely existence of p-zombies, so must that subroutine of the paperclipper! Hence a part of the paperclipper experiences pain and pleasure. I submit that this can be taken pars pro toto: it is no different from only a part of the human brain generating pain and pleasure while we commonly say that “the human” experiences them.
That the aforementioned feelings of pleasure and pain are not directly used to guide the (umbrella) agent’s actions is of no consequence; the feeling exists nonetheless.
The power of this revelation is strong; here come the tongues! Why are you translating! This is merely a comedic effect! This is also a test of your browser plugin.
That means that we can define a subroutine within the paperclipper which is functionally isomorphic to that agent.
Not necessarily. The constant map x → 0 is input-output isomorphic to Goodstein() (every Goodstein sequence terminates at 0) without being causally isomorphic to it. There are such things as simplifications.
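A minimal Python sketch of that contrast (my own illustration, with invented function names): running a Goodstein sequence to termination returns 0 for every starting value, exactly like the constant-zero map, yet the two computations share no internal structure.

```python
def bump_base(n, b):
    """Rewrite n in hereditary base-b notation, then replace every b with b+1."""
    if n == 0:
        return 0
    total, exponent = 0, 0
    while n > 0:
        digit = n % b
        # The exponent is itself written in hereditary base-b, so bump it recursively.
        total += digit * (b + 1) ** bump_base(exponent, b)
        n //= b
        exponent += 1
    return total

def goodstein(n):
    """Run the Goodstein sequence from n; by Goodstein's theorem it reaches 0."""
    base = 2
    while n > 0:
        n = bump_base(n, base) - 1
        base += 1
    return n  # always 0 -- though for n >= 4 only after astronomically many steps

def constant_zero(n):
    """Input-output equivalent to goodstein(), causally nothing like it."""
    return 0

print(goodstein(3), constant_zero(3))  # both print 0
```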
If the agent-to-be-modelled is experiencing pain and pleasure, then by the defendant’s own rejection of the likely existence of p-zombies, so must that subroutine of the paperclipper!
Quite likely. A paperclipper has no reason to avoid sentient predictive routines via a nonperson predicate; that’s only an FAI desideratum.
A subroutine, or any other simulation or model, isn’t a p-zombie as usually defined, since p-zombies are physical duplicates. A sim is a functional equivalent (for some value of “equivalent”) made of completely different stuff, or no particular kind of stuff.
I wrote a lengthy comment on just that, but scrapped it because it became rambling.
An outsider could indeed tell them apart by scanning for exact structural correspondence, but that seems like cheating. Since peering beyond the veil / opening Clippy’s box is not allowed in a Turing test scenario, let’s define some p-zombie-ish test following the same template. If it quales like a duck (etc.), it probably is sufficiently duck-like.
I would rather maintain p-zombie in its usual meaning and introduce a new term, e.g. c-zombie, for Turing-indistinguishable functional duplicates.