There’s a commonly accepted line of thought around here whereby any sufficiently good digital approximation of a human brain is that human, at least in some metaphysical sense, because its model of the brain runs the same underlying algorithms that describe how the real brain works.
(It doesn’t make much sense to me, since it seems to conflate the mathematical model with the physical reality, but as it’s usually expressed as an ethical principle it isn’t really under any obligation to make sense.)
The important thing is that once you identify sufficiently good simulations as moral agents, you end up twisting yourself into ethical knots over things like how powerful beings in the far future treat the NPCs in their equivalent of video games. For that reason, and others I’m not going to get into here, it seems like a fairly maladaptive belief even if it were accurate.